Cannot run with vLLM

#23
by tankpigg - opened

I have done this:

```
pip install git+https://github.com/huggingface/[email protected]
```

and `pip freeze | grep transformers` shows:

```
transformers @ git+https://github.com/huggingface/transformers@46350f5eae87ac1d168ddfdc57a0b39b64b9a029
```

My vllm version is 0.7.3.
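As a quick sanity check that the pin actually took effect, you can compare the commit hash in the freeze line against the one you expect. A minimal sketch (the hash is copied from the freeze output above):

```python
import re

# The line reported by `pip freeze | grep transformers`
freeze_line = ("transformers @ git+https://github.com/huggingface/"
               "transformers@46350f5eae87ac1d168ddfdc57a0b39b64b9a029")

# A git pin ends with the full 40-character commit hash
commit = re.search(r"@([0-9a-f]{40})$", freeze_line).group(1)
print(commit[:7])  # 46350f5 — the short hash of the pinned commit
```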
I try to run the model with this command:

```
vllm serve gemma3_27b --served-model-name gemma3_27b --tensor-parallel-size 4 --trust-remote-code --gpu_memory_utilization 0.95 --dtype bfloat16
```

Then an error occurs:

```
ValueError: Unrecognized configuration class <class 'transformers.models.gemma3.configuration_gemma3.Gemma3Config'> for this kind of AutoModel: AutoModel. ......GemmaConfig, Gemma2Config, Gemma3TextConfig, GitConfig, GlmConfig, .........
```
`Gemma3TextConfig` is listed in the error message, but `Gemma3Config` is not. So I manually added `("gemma3", "Gemma3ForCausalLM")` at line 121 of `anaconda3/lib/python3.11/site-packages/transformers/models/auto/modeling_auto.py`. After that `Gemma3Config` seems to be found, but a new error occurred:
```
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/user/anaconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/user/anaconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 402, in run_mp_engine
    raise e
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 124, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 76, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 271, in __init__
    super().__init__(*args, **kwargs)
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in __init__
    self._init_executor()
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor
    self._run_workers("load_model",
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
    driver_worker_output = run_method(self.driver_worker, sent_method,
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/utils.py", line 2196, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model
    self.model_runner.load_model()
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
    self.model = get_model(vllm_config=self.vllm_config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
    return loader.load_model(vllm_config=vllm_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 406, in load_model
    model = _initialize_model(vllm_config=vllm_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model
    return model_class(vllm_config=vllm_config, prefix=prefix)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/vllm/model_executor/models/transformers.py", line 138, in __init__
    self.model: PreTrainedModel = AutoModel.from_config(
                                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 440, in from_config
    return model_class._from_config(config, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py", line 273, in _wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1596, in _from_config
    model = cls(config, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 889, in __init__
    self.model = Gemma3TextModel(config)
                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 626, in __init__
    config.vocab_size, config.hidden_size, self.padding_idx, embed_scale=self.config.hidden_size**0.5
                                                                         ^^^^^^^^^^^^^^^^^^
  File "/home/user/anaconda3/lib/python3.11/site-packages/transformers/configuration_utils.py", line 214, in __getattribute__
    return super().__getattribute__(key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Gemma3Config' object has no attribute 'hidden_size'
```
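For what it's worth, the final AttributeError is consistent with `Gemma3Config` being a composite (multimodal) config that nests its text-model settings under `text_config`, so `hidden_size` is not a top-level attribute. A minimal sketch with plain stand-in classes (not the real transformers classes; the value 5376 is only illustrative):

```python
# Stand-in classes illustrating the nested-config layout that produces
# the AttributeError above; these are NOT the real transformers classes.
class TextConfigSketch:
    def __init__(self):
        self.hidden_size = 5376  # illustrative per-text-model setting

class CompositeConfigSketch:
    def __init__(self):
        # Text settings live one level down, under text_config
        self.text_config = TextConfigSketch()

cfg = CompositeConfigSketch()
print(hasattr(cfg, "hidden_size"))  # False — this is what the lookup hits
print(cfg.text_config.hidden_size)  # 5376 — where the setting actually lives
```

This is why registering the composite config against a text-only model class is not enough: the model then reads attributes the composite config does not expose at the top level.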

Can anybody help? Thanks a lot!

Try the following:

```
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install --editable .

pip install git+https://github.com/huggingface/[email protected]
```

https://github.com/vllm-project/vllm/issues/14734#issuecomment-2720738129


It solved my problem! Thank you for your help!

If you then hit `ModuleNotFoundError: No module named 'vllm.benchmarks.serve'`, you need to add the vllm checkout to `PYTHONPATH`:

```
echo 'export PYTHONPATH="/your/vllm/folder/path:$PYTHONPATH"' >> ~/.bashrc
source ~/.bashrc
```
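The export works because Python prepends `PYTHONPATH` entries to `sys.path` at interpreter startup, which is what makes modules inside the vllm checkout importable. A small sketch demonstrating the mechanism (`/tmp/vllm-checkout` is a hypothetical placeholder path):

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH pointing at a (hypothetical)
# vllm checkout directory and confirm the entry shows up on sys.path.
env = dict(os.environ, PYTHONPATH="/tmp/vllm-checkout")
out = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('/tmp/vllm-checkout' in sys.path)"],
    env=env, capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # True
```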
