Browser extension frontend?

#1
by pokexpert - opened

Greetings.
The first thing I thought while reading the model card was that it would be amazing if a browser extension could trigger on images and send them to a remote PaddleOCR-VL endpoint.

Yes, that would be really helpful for end users of this model, but it requires someone to serve the model remotely.
At the moment I am looking for an efficient way to run this model locally. llama.cpp would be perfect if it supported PaddleOCR-VL, but sadly the maintainers decided not to include PaddleOCR-VL in that attempt: https://github.com/ggml-org/llama.cpp/pull/16701
For now, running this model locally with transformers is possible on a modern laptop. I would love to see someone create a dedicated client for this model.
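In case it helps, here is a rough sketch of what that local transformers setup can look like. The repo id, the instruction text, and the processor/chat-template calls are assumptions on my part based on the usual transformers VLM workflow; the model card is the source of truth for the exact API.

```python
# Minimal local-inference sketch with transformers.
# Assumptions (check the PaddleOCR-VL model card for the exact API):
#   - the repo id "PaddlePaddle/PaddleOCR-VL" ships remote code exposing the
#     usual AutoProcessor / AutoModelForCausalLM interfaces,
#   - the processor accepts the standard chat-template message format
#     with an image entry plus a text instruction.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "PaddlePaddle/PaddleOCR-VL"  # assumed repo id

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # small enough to fit on a modern laptop
    device_map="auto",
)

image = Image.open("page.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "OCR this image."},  # assumed instruction
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=1024)

# Strip the prompt tokens and decode only the newly generated text.
generated = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

A dedicated client would basically wrap this in a small local HTTP server, which is also what a browser extension would need to talk to.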
