Faqs #11788

Changes from all commits (51 commits)
5ff5488 Create deployment.yaml (mehar221, May 9, 2025)
2c5a958 Create service.yaml (mehar221, May 9, 2025)
20e7700 Create configmap.yaml (mehar221, May 9, 2025)
658f2df Create Chart.yaml (mehar221, May 9, 2025)
2c3b07d Create values.yaml (mehar221, May 9, 2025)
d365cd8 Create override-values.yaml (mehar221, May 9, 2025)
2d725be Rename override-values.yaml to override-values.json (mehar221, May 9, 2025)
eb9bcf6 Update override-values.json (mehar221, May 9, 2025)
603e789 Update and rename values.yaml to values.json (mehar221, May 9, 2025)
42eda1b Create config.json (mehar221, May 9, 2025)
5560537 Update configmap.yaml (mehar221, May 9, 2025)
e75da6a Update configmap.yaml (mehar221, May 9, 2025)
6156640 Update configmap.yaml (mehar221, May 9, 2025)
6f7a1f9 Update configmap.yaml (mehar221, May 9, 2025)
db9525e Create config.json (mehar221, May 9, 2025)
1b51f12 Update configmap.yaml (mehar221, May 9, 2025)
dfb8fa7 Update configmap.yaml (mehar221, May 9, 2025)
75dc517 Update configmap.yaml (mehar221, May 14, 2025)
640efca Update configmap.yaml (mehar221, May 19, 2025)
2ae4739 Update values.json (mehar221, May 19, 2025)
8afee64 Update configmap.yaml (mehar221, May 20, 2025)
1775a87 Update configmap.yaml (mehar221, May 20, 2025)
2f6b73c Update configmap.yaml (mehar221, May 20, 2025)
2056381 Delete my-helm-chart directory (mehar221, May 20, 2025)
5611aae Create configmap.yaml (mehar221, May 20, 2025)
05ae9f6 Create deployment.yaml (mehar221, May 20, 2025)
4acc02c Create service.yaml (mehar221, May 20, 2025)
252ff0b Create values.yaml (mehar221, May 20, 2025)
c96d892 Update values.yaml (mehar221, May 20, 2025)
6dac302 Create Chart.yaml (mehar221, May 20, 2025)
9f4379c Update configmap.yaml (mehar221, May 20, 2025)
748eff5 Create config.json (mehar221, May 20, 2025)
cbd665b Update configmap.yaml (mehar221, May 20, 2025)
dd08bb7 Update configmap.yaml (mehar221, May 20, 2025)
bbcbf9a Update config.json (mehar221, May 20, 2025)
41be439 Update values.yaml (mehar221, May 20, 2025)
7f75759 Create values.yaml (mehar221, May 20, 2025)
c4bf3e4 Delete my-helm-chart/values.yaml (mehar221, May 20, 2025)
e59d4dc Update configmap.yaml (mehar221, May 20, 2025)
d9f3dbb Update values.yaml (mehar221, May 20, 2025)
9939a19 Update config.json (mehar221, May 20, 2025)
b2505f4 Update config.json (mehar221, May 20, 2025)
0589918 Update configmap.yaml (mehar221, May 20, 2025)
09ebd49 Update config.json (mehar221, May 20, 2025)
e42580f Update configmap.yaml (mehar221, May 20, 2025)
aec360a Update config.json (mehar221, May 20, 2025)
e07278e Update service.yaml (mehar221, May 20, 2025)
5e9b338 Create secrets.txt (mehar221, Jun 12, 2025)
8d204b3 Merge branch 'harness:main' into main (mehar221, Oct 31, 2025)
b3e020e Update internal-developer-portal.md (mehar221, Oct 31, 2025)
2fb8b24 Update harness-platform-faqs (mehar221, Oct 31, 2025)
71 changes: 70 additions & 1 deletion docs/faqs/internal-developer-portal.md
@@ -58,4 +58,73 @@ During onboarding into IDP we mass onboard all the services using a `catalog-inf

3. In some cases the entities get into the `hasError` state. You can tell whether an entity is in the orphaned state or the `hasError` state by checking the **Processing Status** dropdown on the Catalog page

4. Additionally, here is an example [script](https://github.com/harness-community/idp-samples/blob/main/catalog-scripts/identify-and-delete-orphan-entity.py) that finds and deletes all the entities that have a `NotFoundError`, because the `source-location` for these entities is no longer valid (YAML files moved or renamed).

### What is the purpose of using backend proxies in IDP?

Backend proxies in IDP allow you to fetch external data sources like JSON files directly from GitHub or Harness Code, without needing a dedicated backend API. This helps centralize input data and simplifies workflow management.

### How does the GitHub raw proxy configuration work?

The GitHub raw proxy redirects requests from `/api/proxy/github-raw/` to `https://raw.githubusercontent.com/`, using the `PROXY_GITHUB_TOKEN` secret for authentication. It enables you to retrieve JSON files from GitHub repositories and use their contents dynamically in workflows.
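
The corresponding proxy entry follows the same pattern as the Harness Code example in the next question. This is a minimal sketch: the endpoint name and the `Authorization` header format are assumptions rather than the exact built-in configuration.

```yaml
proxy:
  endpoints:
    /github-raw:
      target: https://raw.githubusercontent.com/
      pathRewrite:
        /api/proxy/github-raw/?: /
      headers:
        # GitHub PAT stored as a Harness secret; header format assumed for raw.githubusercontent.com
        Authorization: token ${PROXY_GITHUB_TOKEN}
```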

### How can I configure a backend proxy for Harness Code?

You can define the following proxy in your plugin configuration:

```yaml
proxy:
  endpoints:
    /harness-code:
      target: https://app.harness.io/gateway/code/api/v1/repos/<account_id>/
      pathRewrite:
        /api/proxy/harness-code/?: /
      headers:
        x-api-key: ${PROXY_HARNESS_TOKEN}
```


This setup uses a Harness API key stored in a secret (PROXY_HARNESS_TOKEN) to authenticate and fetch raw JSON files from Harness Code repositories.

### What type of authentication is needed for these proxies?

- **GitHub proxy**: requires a GitHub PAT stored as a Harness secret (`PROXY_GITHUB_TOKEN`).
- **Harness Code proxy**: requires a Harness API key with read access to Code repositories (`PROXY_HARNESS_TOKEN`).

Both tokens are securely referenced from Harness secrets.

### How can I use the backend proxy in a workflow to populate dropdowns?

Use the SelectFieldFromApi field in your workflow YAML:

```yaml
properties:
  some-property:
    type: string
    ui:field: SelectFieldFromApi
    ui:options:
      title: Some Property
      description: An input for users to select
      path: "proxy/harness-code/<org>/<project>/<repo>/+/raw/<path to json>"
```


The dropdown values will be fetched from the JSON file stored in your repository.

### Can I simplify proxy paths by locking the configuration to a specific org or repo?

Yes. You can define organization-, project-, or repository-specific targets in your proxy configuration. This reduces the amount of information you need to pass in the workflow path.
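
For example, a proxy endpoint locked to one repository might look like the following sketch, which reuses the pattern above; the endpoint name is illustrative, and the placeholders stand in for your own account, org, project, and repo:

```yaml
proxy:
  endpoints:
    /harness-code-myrepo:
      # org, project, and repo are fixed in the target, so workflows only pass the file path
      target: https://app.harness.io/gateway/code/api/v1/repos/<account_id>/<org>/<project>/<repo>/+/raw/
      pathRewrite:
        /api/proxy/harness-code-myrepo/?: /
      headers:
        x-api-key: ${PROXY_HARNESS_TOKEN}
```

The workflow path then shortens to `proxy/harness-code-myrepo/<path to json>`.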

### How can I bulk register multiple components in Harness IDP?

You can use the Catalog API with a custom script that iterates over multiple catalog locations. The script uses account details, API keys, and bearer tokens to automate the registration process for multiple component URLs in one go.

### What is the purpose of the token field in workflow YAMLs?

The token field (with ui:widget: password and ui:field: HarnessAuthToken) securely fetches the user’s short-lived session token for API calls during workflow execution. It ensures authentication without exposing sensitive credentials on the UI.
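
A typical token parameter looks like the following minimal sketch (the page title is illustrative); steps can then reference the value as `${{ parameters.token }}`:

```yaml
parameters:
  - title: Service details
    properties:
      token:
        title: Harness Token
        type: string
        ui:widget: password         # masks the value in the UI
        ui:field: HarnessAuthToken  # injects the user's short-lived session token at run time
```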

### Why is the token visible during the “Review” step in multi-page workflows?

If the token field is placed on any page other than the first, its ui:widget: password property isn’t evaluated correctly, causing the token to appear as plain text during the Review step. This is a known issue after the Backstage upgrade to v1.28.

### How can I prevent tokens from being exposed in multi-page workflows?

To hide tokens properly, move the token field to the first page (spec.parameters[0]) of the workflow YAML. This ensures the token remains masked (*****) throughout execution and prevents exposure during the Review step.
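
A sketch of the corrected layout, with the token declared on the first page; the second-page fields are only examples:

```yaml
spec:
  parameters:
    # Page 1 (spec.parameters[0]) holds the token so ui:widget: password is applied correctly
    - title: Basic details
      properties:
        token:
          title: Harness Token
          type: string
          ui:widget: password
          ui:field: HarnessAuthToken
    # Page 2 and later pages hold the remaining inputs
    - title: Configuration
      properties:
        environment:
          title: Environment
          type: string
```
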
79 changes: 79 additions & 0 deletions docs/platform/harness-platform-faqs.md
@@ -3441,3 +3441,82 @@ Yes, Harness has an API to check the status of the deployment. You can check here

### How can a user restart a delegate?
A user can restart a delegate by deleting its pod.

### How does SCIM help with user synchronization in Harness?
SCIM (System for Cross-domain Identity Management) ensures continuous and real-time synchronization of user groups and access rights between your SAML provider and Harness. By enabling SCIM, user additions and updates in Okta (or another SAML provider) are automatically reflected in Harness, ensuring that users inherit the correct permissions and access levels. This helps in maintaining accurate access control across systems without manual intervention.

### What happens if the email addresses in Okta and Harness do not match?
For SAML SSO to work properly, the email addresses in Okta and Harness must match exactly. If there is a mismatch or case sensitivity issue, Harness will convert the email address to lowercase before registering it. Therefore, ensure that users are invited to Harness using the same email address as in Okta to avoid login issues.

### What should I do if a user is a member of more than 150 groups in Microsoft Entra ID?
When a user is part of more than 150 groups, Microsoft Entra ID limits the groups that can be included in the SAML token. In this case, a link to the Graph endpoint for retrieving group information will be included instead. To enable this functionality in Harness, you must configure the Azure app with a Client ID and Client Secret, and ensure that the required API permissions, such as User.Read.All, Directory.Read.All, and Group.Read.All, are granted for the app.

### How do I configure the Unique User Identifier for Azure SSO in Harness?
To configure the Unique User Identifier for Azure SSO in Harness, set the Azure app's user.userprincipalname as the Unique User Identifier and choose "Email address" as the Name Identifier format. If user.userprincipalname isn't used, ensure the correct attribute is mapped to the Unique User Identifier with the email address.

### Can I synchronize LDAP users manually with a Harness User Group?
Yes, you can manually synchronize LDAP users with a Harness User Group. To do this:

1. Link your Harness User Group to the LDAP SSO configuration.
2. Go to **Account Settings** and click **Authentication**.
3. In the **Login via LDAP** section, click the three dots next to your LDAP SSO configuration and select **Synchronize User Groups**.

This manually triggers synchronization of LDAP users into the associated Harness User Group.

### Can I configure multiple OIDC providers for authentication in Harness?
Yes, Harness supports the configuration of multiple OIDC providers. This means you can authenticate users from different identity providers, providing flexibility for organizations that use more than one authentication service. Each provider can be set up to handle user access and provisioning separately.

### Why should I use CyberArk Conjur for secret management in Harness?
CyberArk Conjur provides robust security features like role-based access control (RBAC), auditing, and encryption to manage sensitive data. By integrating it with Harness, you can centralize your secret management and avoid hardcoding secrets in your pipeline or application configurations. This integration also allows you to automate secret retrieval during deployments, making your processes more secure and efficient while ensuring compliance with organizational security policies.

### What is CyberArk Conjur and how does it integrate with Harness for secret management?
CyberArk Conjur is a secrets management solution that helps organizations securely store and manage sensitive data, such as credentials, keys, and API tokens. When integrated with Harness, it enables users to securely store secrets and use them within Harness workflows, such as deployment pipelines. By linking CyberArk Conjur as a custom secret manager in Harness, you can ensure that sensitive information is fetched securely from Conjur and used dynamically during various operations without exposing it in the open.

### Why are file secrets not masked in Harness logs?
In Harness, file secrets are not masked in logs because they can vary in format (e.g., JSON, YAML, etc.) and can contain larger, more complex data compared to text secrets. While text secrets are securely masked in logs, file secrets may include critical configuration or key data that needs to be decoded. Therefore, it's important to handle file secrets carefully, ensuring that access to logs is restricted and that appropriate security measures, such as RBAC and encrypted storage, are in place to protect sensitive information.

### What are the key permissions required for a Nexus connector in Harness?
To successfully connect and use a Nexus connector in Harness, the user account associated with the connector must have specific permissions on the Nexus Server. These include the "Repo: All repositories (Read)" permission, which allows the account to access all repositories, and the "Nexus UI: Repository Browser" permission, which grants access to the repository browsing functionality. Additionally, if using Nexus 3 as a Docker repository, the account must have the "nx-repository-view-__*" privilege to interact with Docker images in the repository. These permissions are essential for the connector to work properly with Nexus repositories.

### What are Verification Providers in Harness?
Verification Providers in Harness allow you to integrate monitoring and logging systems to verify the success of deployments. By connecting platforms like AppDynamics, Prometheus, New Relic, and others, you can monitor the health of your deployed applications, validate metrics, and check the system's behavior after an update. This integration ensures that your deployments align with the expected performance criteria, providing real-time feedback on the deployment status.

### How Does Service Discovery Work with Multiple Namespaces?
Harness allows service discovery across multiple namespaces by configuring inclusion or exclusion settings in the discovery agent. This setup enables the agent to discover services in specified namespaces, requiring appropriate roles and permissions for the service account to manage the discovery process.

### How Does OAuth Enhance Security in Git Operations with Harness?
OAuth integration enhances security by ensuring that commits to Git are made using authorized user credentials, rather than shared or stored Git credentials. This provides better control over who is making changes, prevents unauthorized access, and complies with security practices by storing access tokens securely in the Harness user profile.

### What are the benefits of using a local NTP server or Google Public NTP for time synchronization?
Using a local NTP server or Google Public NTP ensures your systems are always in sync with a reliable time source. This reduces the chances of errors, helps systems stay consistent, and makes sure everything is on the same page in terms of time.

### What happens if my pipeline and its referenced entities are in different Git repositories?
If your pipeline references templates or other entities stored in a different repository, Harness pulls them from the default branch of those repositories. This ensures executions always use stable, tested configurations, regardless of where the referenced entities live.

### Can I enforce Git Experience across all my Harness resources?
Yes. Enabling **Enforce Git Experience** requires resources to be saved in Git repositories rather than inline, so your team always works with up-to-date, version-controlled configurations.

### What does the `FibonacciBackOff - SocketTimeoutException: Connect timed out` error mean?
This error means the Harness Delegate can't reach the Manager endpoint, most often because of:
- a wrong Manager URL in the delegate YAML,
- a network or firewall rule blocking port 443, or
- DNS issues or Manager downtime.

To fix it:
- Verify that `MANAGER_HOST_AND_PORT` is correct (see the sketch below).
- Run `curl` or `telnet` from the delegate node to check connectivity.
- Ensure port 443 is open and DNS resolves the Manager URL.

The delegate retries with a Fibonacci backoff delay; once the network or configuration issue is fixed, it reconnects automatically.
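
For reference, the Manager endpoint is set through the `MANAGER_HOST_AND_PORT` environment variable in the delegate manifest. A minimal excerpt, with an illustrative value:

```yaml
# Container env in the delegate Kubernetes manifest (value shown is illustrative)
env:
  - name: MANAGER_HOST_AND_PORT
    value: https://app.harness.io  # must resolve and be reachable on port 443 from the delegate
```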

### What should I do if an inline entity is accidentally deleted in Harness?
If an inline entity (one not stored in Git) is deleted from the Harness UI, it cannot be automatically restored the way remote, Git-backed entities can.

What you can do:
- Check **Audit Trails** to view the entity's previous configuration.
- Manually recreate the entity using this data.

There is no direct restore option for inline entities; manual recreation is the only recovery method.

### How can I stop and start an EC2 instance using a Harness pipeline?
You can stop and start an EC2 instance in a Harness pipeline using a Shell Script step that runs an AWS CLI command or a Python script with the Boto3 library. Ensure the delegate has AWS CLI and Python with Boto3 installed, and that AWS credentials are configured (via environment variables or IAM role/profile). This is helpful for workflows like upgrading EC2 instance types, where stopping and starting is required.
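
A minimal sketch of such a step, assuming the delegate has the AWS CLI installed and credentials available; the step name, identifier, and instance ID are placeholders:

```yaml
- step:
    type: ShellScript
    name: Stop and Start EC2
    identifier: stop_start_ec2
    timeout: 10m
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: |-
            INSTANCE_ID=i-0123456789abcdef0   # placeholder instance ID
            aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
            aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
            # make changes (for example, modify the instance type) while the instance is stopped
            aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```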

### What does "Incomplete Secret" mean in the context of creating secrets via YAML?
An "Incomplete Secret" status indicates that the secret has been created successfully, but it is not fully usable. This occurs because the secret value cannot be provided in plain text within the YAML file. The value must be entered manually in the visual mode after the skeleton of the secret is created via YAML.

### How can I complete an "Incomplete Secret"?
After creating the secret skeleton via YAML, you need to manually edit the secret in the visual mode of your platform. There, you can input the secret value into the appropriate password field.

7 changes: 7 additions & 0 deletions my-helm-chart/overrides/config.json
@@ -0,0 +1,7 @@
myconfig: |-
  {
    \"authConfig\": {
      \"tenant\": \"OIDP\",
      \"loginApiUrl\": \"ksdn\"
    }
  }
6 changes: 6 additions & 0 deletions my-helm-chart/templates/Chart.yaml
@@ -0,0 +1,6 @@
apiVersion: v2
name: rac-app
description: A Helm chart for deploying the RAC application
type: application
version: 0.1.0
appVersion: "1.0.0"
7 changes: 7 additions & 0 deletions my-helm-chart/templates/configmap.yaml
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: rac-config-files
data:
  config.json: |-
{{ toYaml .Values.myconfig | indent 2 }}
27 changes: 27 additions & 0 deletions my-helm-chart/templates/deployment.yaml
@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app-container
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: rac-config-files
11 changes: 11 additions & 0 deletions my-helm-chart/templates/service.yaml
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: svc-new
spec:
  type: ClusterIP
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 80
10 changes: 10 additions & 0 deletions my-helm-chart/templates/values.yaml
@@ -0,0 +1,10 @@
replicaCount: 1

image:
  repository: library/nginx
  tag: latest

service:
  type: ClusterIP
  port: 80

1 change: 1 addition & 0 deletions secrettesting/secrets.txt
@@ -0,0 +1 @@
Name: "Manisha"