Thanks for producing this video. I did not realize that GraphDB was so advanced. This feature is very relevant since I have a process to turn all of my data into RDF/TTL. I now wonder how GraphDB compares to Neo4j and AWS Neptune. I have a dataset that updates monthly that is roughly 700MB in Neo4j. It would be interesting to compare technologies.
I know Neo4j just came out with a CSV-to-graph library, and they also have neosemantics, which converts to and from RDF. Neptune doesn't have a native data science studio, but it obviously has SageMaker and all the other ML tools in the AWS suite. And yeah, I have wanted to do a side-by-side with the same dataset and queries across all the graph databases I have reviewed, but I just haven't found the time yet!
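For anyone curious about the neosemantics route, the rough shape from Python looks something like this. It's only a sketch, assuming the n10s plugin is already installed; the connection details and the .ttl URL are placeholders, not a real setup:

    # Minimal sketch: import a Turtle file into Neo4j via neosemantics (n10s).
    # Credentials and URL are placeholders; n10s plugin assumed to be installed.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        # One-time setup: n10s requires a uniqueness constraint on Resource.uri.
        session.run(
            "CREATE CONSTRAINT n10s_unique_uri IF NOT EXISTS "
            "FOR (r:Resource) REQUIRE r.uri IS UNIQUE"
        )
        session.run("CALL n10s.graphconfig.init()")
        # Fetch and load RDF/Turtle from a URL (placeholder).
        session.run(
            "CALL n10s.rdf.import.fetch($url, 'Turtle')",
            url="https://example.org/my-data.ttl",
        )
    driver.close()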
@AshleighFaith Hi. In my situation, the source files are compressed S3 objects. Our infrastructure policy did not allow creating a large temporary EBS volume to hold a copy of the data, so I wrote a Python script to read the files from S3 and pump the data into Neo4j. The approach trades some import speed for not needing the local volume. This comment is apropos of nothing. I just wanted to vent. :)
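In case it helps anyone, the script was roughly this shape. It's only a sketch of the approach, not my actual code; the bucket, key, node label, and credentials are all placeholders:

    # Rough sketch: stream gzipped CSV objects from S3 straight into Neo4j,
    # batching rows so nothing has to land on a local volume first.
    import csv
    import gzip
    import io

    import boto3
    from neo4j import GraphDatabase

    s3 = boto3.client("s3")
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def rows_from_s3(bucket, key):
        # Read the compressed object as a stream; nothing is written to disk.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        with gzip.GzipFile(fileobj=body) as gz:
            yield from csv.DictReader(io.TextIOWrapper(gz, encoding="utf-8"))

    def load(bucket, key, batch_size=1000):
        batch = []
        with driver.session() as session:
            for row in rows_from_s3(bucket, key):
                batch.append(row)
                if len(batch) >= batch_size:
                    session.run("UNWIND $rows AS row MERGE (:Record {id: row.id})", rows=batch)
                    batch = []
            if batch:
                session.run("UNWIND $rows AS row MERGE (:Record {id: row.id})", rows=batch)

    load("my-bucket", "exports/data.csv.gz")
    driver.close()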
Hi Ashleigh, how did you set up your configuration? I had a problem with mine.
Hi Ashleigh, I'm the VP of Sales for an emerging Knowledge Graph company, Kobai. Appreciate your channel.
@DavidMedinets we all need to do that sometimes! But I’m glad you shared your work-around because that’s the real value of sharing knowledge.