
Commit 7666e20

[CoreEngine] Fixed an issue where the product-id file could not be read due to restricted permissions.
1 parent: e0c6042

File tree: 5 files changed (+13, -9 lines)

devops/k8s/README_MODEL_SERVING.md
Lines changed: 2 additions & 2 deletions

@@ -101,7 +101,7 @@ Moreover, on GCP k8s cluster, you should set up your GPU nodes based on the foll
 After you have installed FedML model serving packages, you may run the helm upgrade commands to modify parameters.

 e.g.
-```helm upgrade --set "autoscaling.enabled=true" --set replicaCount=$InstanceNumber fedml-model-premise-master fedml-model-premise-master-0.7.397.tgz -n $YourNameSpace```
+```helm upgrade --set "autoscaling.enabled=true" --set replicaCount=$InstanceNumber fedml-model-premise-master fedml-model-premise-master-latest.tgz -n $YourNameSpace```

 ### 6). Config your CNAME record in your DNS provider (Godaddy, wordpress, AWS Route 53...)
 #### (a). Find the Kubernetes nginx ingress named 'fedml-model-inference-gateway' in your Kubernetes cluster.
@@ -150,7 +150,7 @@ Pull remote model(ModelOps) to local model repository:
 1. Q: Supports automatically scale?
 A: Yes. Call CLI `helm upgrade`. For example, you can do upgrade by using the following CLI:

-```helm upgrade --set "autoscaling.enabled=true" --set replicaCount=$InstanceNumber fedml-model-premise-master fedml-model-premise-master-0.7.397.tgz -n $YourNameSpace```
+```helm upgrade --set "autoscaling.enabled=true" --set replicaCount=$InstanceNumber fedml-model-premise-master fedml-model-premise-master-latest.tgz -n $YourNameSpace```


 2. Q: Does the inference endpoint supports private IP? \

python/fedml/__init__.py
Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@
 _global_training_type = None
 _global_comm_backend = None

-__version__ = "0.8.2a3"
+__version__ = "0.8.2a4"


 def init(args=None):

python/fedml/cli/comm_utils/sys_utils.py
Lines changed: 8 additions & 4 deletions

@@ -474,10 +474,14 @@ def get_device_id_in_docker():

     if os.path.exists(docker_env_file) or os.path.exists(cgroup_file):
         if os.path.exists(product_uuid_file):
-            with open(product_uuid_file, 'r') as f:
-                device_id = f.readline().rstrip("\n").strip(" ")
-            if device_id == "":
-                device_id = str(uuid.uuid4())
+            try:
+                with open(product_uuid_file, 'r') as f:
+                    device_id = f.readline().rstrip("\n").strip(" ")
+                if device_id == "":
+                    device_id = str(uuid.uuid4())
+                return f"{device_id}-docker"
+            except Exception as e:
+                device_id = str(uuid.uuid4())
             return f"{device_id}-docker"
         return None
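The patch above wraps the product-id read in a try/except so a permission-restricted file no longer breaks device-ID detection; any read failure falls back to a freshly generated UUID. Below is a minimal, self-contained sketch of that fallback pattern; the helper name, default path, and argument are illustrative assumptions, not FedML's exact API.

```python
import os
import uuid

# Sketch of the fallback pattern (hypothetical helper, not FedML's exact API).
# Assumption: product_uuid_file points at something like /sys/class/dmi/id/product_uuid,
# which may exist but be unreadable inside a container with restricted permissions.
def read_device_id(product_uuid_file="/sys/class/dmi/id/product_uuid"):
    device_id = ""
    if os.path.exists(product_uuid_file):
        try:
            with open(product_uuid_file, "r") as f:
                device_id = f.readline().rstrip("\n").strip(" ")
        except Exception:
            # PermissionError or any other read failure lands here;
            # fall through to the random-UUID fallback below.
            device_id = ""
    if device_id == "":
        device_id = str(uuid.uuid4())
    return f"{device_id}-docker"


if __name__ == "__main__":
    print(read_device_id())
```

Catching a broad `Exception` (rather than only `PermissionError`) mirrors the commit and keeps device-ID generation from ever blocking agent startup.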

python/fedml/core/mlops/mlops_runtime_log.py
Lines changed: 1 addition & 1 deletion

@@ -119,7 +119,7 @@ def log_ntp_time(sec, what):
             ntp_time_seconds = time.time()
             ntp_time = datetime.datetime.fromtimestamp(ntp_time_seconds)
             return ntp_time.timetuple()
-        # logging.Formatter.converter = log_ntp_time
+        logging.Formatter.converter = log_ntp_time
         stdout_handle = logging.StreamHandler()
         stdout_handle.setFormatter(format_str)
         if show_stdout_log:
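Re-enabling `logging.Formatter.converter = log_ntp_time` routes every `%(asctime)s` timestamp through the NTP-based converter instead of the default `time.localtime`. The sketch below shows the underlying mechanism using the standard-library `time.gmtime` as a stand-in converter; it is an assumption for illustration only, not FedML's `log_ntp_time`.

```python
import logging
import time

# Assigning a converter on the Formatter class changes how %(asctime)s is rendered
# for every formatter that does not override it. Here time.gmtime emits UTC timestamps;
# the FedML commit swaps in its NTP-based log_ntp_time instead.
logging.Formatter.converter = time.gmtime

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("converter-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("asctime above now comes from the class-level converter")
```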

python/setup.py
Lines changed: 1 addition & 1 deletion

@@ -88,7 +88,7 @@ def finalize_options(self):

 setup(
     name="fedml",
-    version="0.8.2a3",
+    version="0.8.2a4",
     author="FedML Team",
     author_email="ch@fedml.ai",
     description="A research and production integrated edge-cloud library for "
