If you index different documents at different points in time, you end up with multiple artifacts in the output folder.
How should one search across all of those outputs, as in a production-level application?
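One pragmatic approach (not from the video; a hypothetical sketch) is to merge the per-run artifacts into a single collection before searching, letting later runs override earlier ones. Assuming each extracted entity record carries a stable `id`:

```python
def merge_runs(runs):
    """Merge entity records from multiple indexing runs into one list.

    `runs` is ordered oldest -> newest; when the same id appears in
    several runs, the record from the latest run wins.
    """
    merged = {}
    for run in runs:
        for record in run:
            merged[record["id"]] = record
    return list(merged.values())
```

You would load each timestamped output folder's artifacts into one of the `runs` lists, then query the merged result instead of a single run's output.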
Did you override the default entity_types to extract entities specific to your domain? There are default entity_types ([organization, person, geo, event]) defined in settings.yaml.
Not for this use case - but if you have more documentation about these settings, that would be wonderful.
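For reference, the relevant block in settings.yaml looks roughly like this (field names as in GraphRAG's default config; the override values here are illustrative, e.g. for a codebase):

```yaml
entity_extraction:
  prompt: "prompts/entity_extraction.txt"
  # Default is [organization, person, geo, event];
  # replace with domain-specific types, e.g.:
  entity_types: [class, function, module, library]
  max_gleanings: 1
```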
I have a PHP code project. How can I use GraphRAG to ask questions about the code and generate documentation from it?
I just wonder if it is possible to use local embeddings running on a separate machine? By the way, it took you one hour to 'graph' your txt file on a 4090 - what do you think it might take on an H100?
I was able to set up a proxy and use local embeddings from Ollama (the proxy.py script in the GitHub repo), and it worked for Global search but not Local. I fell back to OpenAI for embeddings, but they are more or less 'free' anyway.
Not sure about an H100 or A100 card; I will try to find one and test.
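The core of such a proxy is just translating between the two wire formats. A minimal sketch of that translation layer (hypothetical helper names, not the actual proxy.py; assumes Ollama's one-prompt-per-call `/api/embeddings` endpoint and OpenAI's `/v1/embeddings` request/response shapes):

```python
def to_ollama_requests(openai_req, model="nomic-embed-text"):
    """Map an OpenAI-style /v1/embeddings body to one Ollama call per input."""
    inputs = openai_req["input"]
    if isinstance(inputs, str):  # OpenAI accepts a string or a list
        inputs = [inputs]
    return [{"model": model, "prompt": text} for text in inputs]

def to_openai_response(ollama_results, model="nomic-embed-text"):
    """Wrap Ollama embedding results back into the OpenAI response shape."""
    return {
        "object": "list",
        "model": model,
        "data": [
            {"object": "embedding", "index": i, "embedding": r["embedding"]}
            for i, r in enumerate(ollama_results)
        ],
    }
```

A proxy server would apply `to_ollama_requests` to each incoming request, POST each item to the local Ollama instance, and return `to_openai_response` over the collected results, so GraphRAG never knows it isn't talking to OpenAI.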
Dude, it is not free by any measure! The number of calls it makes to the OpenAI service to generate the graph means costs mount quickly. It is misleading to call this free for anything but toy problems. There is currently no way to get this working with the OpenAI models fully replaced by local, open-source models. Please avoid misleading language in an otherwise good video.
Did you watch the video? It can be done for free with Ollama.
What is the issue here? "Error during GraphRAG setup: Command 'python -m graphrag.index --root ./ragtest' returned non-zero exit status 1."
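That message only reports the exit status; the actual traceback (often a missing API key or a bad settings.yaml) is on the command's stderr. One way to surface it (a hypothetical wrapper, using only the command from the error above) is to capture the output yourself:

```python
import subprocess
import sys

# Re-run the indexer and capture its output, so the real error is shown
# instead of just "returned non-zero exit status 1".
result = subprocess.run(
    [sys.executable, "-m", "graphrag.index", "--root", "./ragtest"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stderr)  # full traceback from the indexing run
```

If the wrapping code used `check=True` or `check_output`, dropping that (or catching `CalledProcessError` and printing its `stderr` attribute) achieves the same thing.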