---
license: apache-2.0
base_model:
- unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation
- gemma
- presentation-templates
- information-retrieval
- field-adaptive
- query-generation
- search-queries
datasets:
- cyberagent/crello
language:
- en
---

# Field-Adaptive Query Generator

A fine-tuned text generation model that produces diverse, relevant search queries from presentation template metadata. It uses LoRA adapters to efficiently fine-tune Google Gemma-3-4B-IT for query generation as part of the Field-Adaptive Dense Retrieval framework.

## Model Description

This model generates 8 different search queries from presentation template metadata, including titles, descriptions, industries, categories, and tags. It serves as a key component in the Field-Adaptive Dense Retrieval system for structured documents.

**Base Model:** `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
**Model Type:** Causal Language Model with LoRA
**Language:** English
**License:** Apache 2.0

## Usage

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "mudasir13cs/Field-adaptive-query-generator"
)
tokenizer = AutoTokenizer.from_pretrained(
    "mudasir13cs/Field-adaptive-query-generator"
)

# Format the prompt using the Gemma chat template
prompt = """<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:

Title: Modern Business Presentation
Description: This modern business presentation template features a minimalist design...
Industries: Business, Marketing
Categories: Corporate, Professional
Tags: Modern, Clean, Professional<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt")
# Cap new tokens rather than total length so the prompt never crowds out the output
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```

### With llama.cpp

```bash
# Download the GGUF model
huggingface-cli download mudasir13cs/Field-adaptive-query-generator-gguf \
  query-generator-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False

# Run inference
./llama-cli -m query-generator-q4_k_m.gguf \
  -p "<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:

Title: Modern Business Presentation
Description: This modern business presentation template features a minimalist design...
Industries: Business, Marketing
Categories: Corporate, Professional
Tags: Modern, Clean, Professional<end_of_turn>
<start_of_turn>model
"
```

### With Ollama

```bash
# Import the model into Ollama
ollama create field-adaptive-query-generator -f Modelfile

# Run inference
ollama run field-adaptive-query-generator "<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:

Title: Modern Business Presentation
Description: This modern business presentation template features a minimalist design...
Industries: Business, Marketing
Categories: Corporate, Professional
Tags: Modern, Clean, Professional<end_of_turn>
<start_of_turn>model
"
```

## Expected Output Format

The model generates exactly 8 queries, one per line, with no numbering or bullets:

```
business presentation template
modern corporate slides
professional marketing presentation
blue gradient business template
minimalist corporate design
marketing pitch template
geometric business slides
clean professional presentation
```

## Prompt Format

Always use the Gemma chat template format:

```
<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:

Title: [Template Title]
Description: [Template Description]
Industries: [Industry1, Industry2]
Categories: [Category1, Category2]
Tags: [Tag1, Tag2, Tag3]

Include a mix of:
- Short queries (2-3 words)
- Medium queries (4-6 words)
- Natural language queries
- Industry-specific queries
- Use-case based queries
- Style-based queries

Format: Return exactly 8 queries, one per line, no numbering or bullets.
<end_of_turn>
<start_of_turn>model
```

## Model Details

- **Architecture:** Google Gemma-3-4B-IT with LoRA adapters
- **Training:** Parameter-Efficient Fine-Tuning (PEFT) with LoRA
- **LoRA Rank:** 16
- **LoRA Alpha:** 32
- **Training Epochs:** 3
- **Learning Rate:** 2e-4
- **Batch Size:** 4

## Evaluation

- **BLEU Score:** ~0.75
- **ROUGE Score:** ~0.80
- **Performance:** Optimized for query generation quality in structured document retrieval

## Citation

### Paper

```bibtex
@article{field_adaptive_dense_retrieval,
  title={Field-Adaptive Dense Retrieval of Structured Documents},
  author={Mudasir Syed},
  journal={DBPIA},
  year={2024},
  url={https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12352544}
}
```

### Model

```bibtex
@misc{field_adaptive_query_generator,
  title={Field-adaptive-query-generator for Presentation Template Query Generation},
  author={Mudasir Syed},
  year={2024},
  howpublished={Hugging Face},
  url={https://huggingface.co/mudasir13cs/Field-adaptive-query-generator}
}
```

### Base Model

```bibtex
@misc{gemma_3_4b_it,
  title={Gemma: Open Models Based on Gemini Research and Technology},
  author={Gemma Team and others},
  year={2024},
  howpublished={Hugging Face},
  url={https://huggingface.co/google/gemma-3-4b-it}
}
```

## Related Models

- [Field-Adaptive Description Generator](https://huggingface.co/mudasir13cs/Field-adaptive-description-generator) - Generates descriptions from template metadata

## Author

Mudasir Syed (mudasir13cs)

- GitHub: https://github.com/mudasir13cs
- Hugging Face: https://huggingface.co/mudasir13cs
- LinkedIn: https://pk.linkedin.com/in/mudasir-sayed
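
## Post-Processing (Illustrative)

The prompt asks for exactly 8 queries with no numbering or bullets, but sampled decoder output can still echo part of the prompt, add stray numbering, or leave blank lines. The sketch below shows one way to normalize the raw text into a clean list of queries. It is a minimal illustration, not part of this repository: the `parse_queries` helper, its splitting on the `model` turn marker, and the sample text are all assumptions.

```python
import re

def parse_queries(generated_text: str, expected: int = 8) -> list[str]:
    """Extract clean search queries from raw model output.

    Keeps only the text after the final 'model' turn marker (if the
    prompt was echoed back), strips leading numbering or bullets,
    drops blank lines, and truncates to the expected count.
    """
    # Keep only the model's turn if the decoded text includes the prompt.
    text = generated_text.rsplit("model\n", 1)[-1]
    queries = []
    for line in text.splitlines():
        # Remove leading numbering ("1.", "2)") or bullet characters.
        line = re.sub(r"^\s*(?:\d+[.)]|[-*•])\s*", "", line).strip()
        if line:
            queries.append(line)
    return queries[:expected]

# Hypothetical messy output to demonstrate the cleanup
raw = """1. business presentation template
2. modern corporate slides
- professional marketing presentation
blue gradient business template
"""
print(parse_queries(raw))
```

Pair this with `generated_text` from the Transformers example above; if the result has fewer than 8 entries, re-sampling with a higher temperature is one simple fallback.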