Optimum.habana error: "None is not a local path"

I am getting a "None is not a local path or a model identifier on the model Hub" error when I try to start training with the optimum.habana library. I was wondering if I could get some help with it. Here are the code and the error:

Code:


from transformers import DistilBertConfig, DistilBertTokenizerFast, DistilBertForSequenceClassification
# from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments  # for the Habana fork of transformers
from optimum.habana import GaudiTrainer, GaudiTrainingArguments  # for the Hugging Face optimum.habana version

training_args = GaudiTrainingArguments(
    use_habana=True,
    use_lazy_mode=True,
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=128,  # batch size per device during training
    per_device_eval_batch_size=128,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
    report_to='all'
)
import os
if not os.path.isdir("./results/checkpoint-3500"):
    model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
    model.to(device)
    trainer = GaudiTrainer(
        model=model,                         # the instantiated Transformers model to be trained
        args=training_args,                  # training arguments, defined above
        train_dataset=train_dataset,         # training dataset
        eval_dataset=val_dataset             # evaluation dataset
    )
    trainer.train()
else:
    model = DistilBertForSequenceClassification.from_pretrained("./results/checkpoint-3500")

Error:

Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_projector.bias', 'vocab_transform.bias', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_transform.weight']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'classifier.weight', 'classifier.bias', 'pre_classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
RepositoryNotFoundError                   Traceback (most recent call last)
File /usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py:836, in get_list_of_files(path_or_repo, revision, use_auth_token, local_files_only)
    835 try:
--> 836     return list_repo_files(path_or_repo, revision=revision, token=token)
    837 except HTTPError as e:

File /usr/local/lib/python3.8/dist-packages/huggingface_hub/hf_api.py:1334, in HfApi.list_repo_files(self, repo_id, revision, repo_type, token, timeout)
   1312 """
   1313 Get the list of files in a given repo.
   1314 
   (...)
   1332     `List[str]`: the list of files in a given repository.
   1333 """
-> 1334 repo_info = self.repo_info(
   1335     repo_id,
   1336     revision=revision,
   1337     repo_type=repo_type,
   1338     token=token,
   1339     timeout=timeout,
   1340 )
   1341 return [f.rfilename for f in repo_info.siblings]

File /usr/local/lib/python3.8/dist-packages/huggingface_hub/hf_api.py:1289, in HfApi.repo_info(self, repo_id, revision, repo_type, token, timeout)
   1288 if repo_type is None or repo_type == "model":
-> 1289     return self.model_info(
   1290         repo_id, revision=revision, token=token, timeout=timeout
   1291     )
   1292 elif repo_type == "dataset":

File /usr/local/lib/python3.8/dist-packages/huggingface_hub/hf_api.py:1136, in HfApi.model_info(self, repo_id, revision, token, timeout, securityStatus)
   1133 r = requests.get(
   1134     path, headers=headers, timeout=timeout, params=status_query_param
   1135 )
-> 1136 _raise_for_status(r)
   1137 d = r.json()

File /usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_errors.py:78, in _raise_for_status(request)
     76 if request.status_code == 401:
     77     # The repo was not found and the user is not Authenticated
---> 78     raise RepositoryNotFoundError(
     79         f"401 Client Error: Repository Not Found for url: {request.url}. If the"
     80         " repo is private, make sure you are authenticated. (Request ID:"
     81         f" {request_id})"
     82     )
     84 _raise_with_request_id(request)

RepositoryNotFoundError: 401 Client Error: Repository Not Found for url: https://huggingface.co/api/models/None. If the repo is private, make sure you are authenticated. (Request ID: aHGbVXZi6yrQUYVhwq7a7)

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
Input In [49], in <cell line: 2>()
      3     model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
      4     model.to(device)
----> 5     trainer = GaudiTrainer(
      6     model=model,                         # the instantiated Transformers model to be trained
      7     args=training_args,                  # training arguments, defined above
      8     train_dataset=train_dataset,         # training dataset
      9     eval_dataset=val_dataset             # evaluation dataset
     10     )
     11     trainer.train()
     12 else:

File /usr/local/lib/python3.8/dist-packages/optimum/habana/trainer.py:148, in GaudiTrainer.__init__(self, model, gaudi_config, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
    133 super().__init__(
    134     model,
    135     args,
   (...)
    144     preprocess_logits_for_metrics,
    145 )
    147 if gaudi_config is None:
--> 148     self.gaudi_config = GaudiConfig.from_pretrained(args.gaudi_config_name)
    149 else:
    150     self.gaudi_config = copy.deepcopy(gaudi_config)

File /usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py:534, in PretrainedConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    457 @classmethod
    458 def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
    459     r"""
    460     Instantiate a [`PretrainedConfig`] (or a derived class) from a pretrained model configuration.
    461 
   (...)
    532     assert unused_kwargs == {"foo": False}
    533     ```"""
--> 534     config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
    535     if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
    536         logger.warning(
    537             f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
    538             f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
    539         )

File /usr/local/lib/python3.8/dist-packages/optimum/configuration_utils.py:179, in BaseConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    177     config_file = pretrained_model_name_or_path
    178 else:
--> 179     configuration_file = cls.get_configuration_file(
    180         pretrained_model_name_or_path,
    181         revision=revision,
    182         use_auth_token=use_auth_token,
    183         local_files_only=local_files_only,
    184     )
    186     if os.path.isdir(pretrained_model_name_or_path):
    187         config_file = os.path.join(pretrained_model_name_or_path, configuration_file)

File /usr/local/lib/python3.8/dist-packages/optimum/configuration_utils.py:115, in BaseConfig.get_configuration_file(cls, path_or_repo, revision, use_auth_token, local_files_only)
     95 """
     96 Get the configuration file to use for this version of transformers.
     97 
   (...)
    112     :obj:`str`: The configuration file to use.
    113 """
    114 # Inspect all files from the repo/folder.
--> 115 all_files = get_list_of_files(
    116     path_or_repo, revision=revision, use_auth_token=use_auth_token, local_files_only=local_files_only
    117 )
    118 configuration_files_map = {}
    119 _re_configuration_file = cls._re_configuration_file()

File /usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py:838, in get_list_of_files(path_or_repo, revision, use_auth_token, local_files_only)
    836     return list_repo_files(path_or_repo, revision=revision, token=token)
    837 except HTTPError as e:
--> 838     raise ValueError(
    839         f"{path_or_repo} is not a local path or a model identifier on the model Hub. Did you make a typo?"
    840     ) from e

ValueError: None is not a local path or a model identifier on the model Hub. Did you make a typo?

Appreciate your help!

Thanks for the post.

The error occurs because the Gaudi configuration name was never specified, so args.gaudi_config_name defaults to None and GaudiConfig.from_pretrained(None) fails inside GaudiTrainer (see the traceback above). The available Gaudi configurations are published under the Habana organization on the Hugging Face Hub: https://huggingface.co/Habana.

There are two ways to fix it:

First, you can specify the Gaudi configuration directly in GaudiTrainingArguments through its gaudi_config_name argument. For example, you can add gaudi_config_name='Habana/distilbert-base-uncased'.
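Here is a minimal sketch that reuses the hyperparameters from your post; only the gaudi_config_name line is new:

training_args = GaudiTrainingArguments(
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name='Habana/distilbert-base-uncased',  # Gaudi configuration to load from the Hub
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=128, # batch size per device during training
    per_device_eval_batch_size=128,  # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
    report_to='all'
)

With this in place, GaudiTrainer can be constructed exactly as in your snippet, since it reads the configuration name from args.gaudi_config_name.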

Second, you can load a GaudiConfig yourself and pass it to GaudiTrainer:

from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig.from_pretrained(
    'Habana/distilbert-base-uncased',
    # ... any additional kwargs go here ...
)
trainer = GaudiTrainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=val_dataset,            # evaluation dataset
    gaudi_config=gaudi_config            # the Gaudi configuration loaded above
)

Here’s an example.

Thanks