From 7.5.x to 7.17.0

  1. Downloaded all the deb files.
  2. Upgrading Kibana, APM Server, Logstash, and Filebeat went through without any hiccups.
  3. Elasticsearch, however, had an issue; cleaning the machine with the following commands resolved it.
sudo apt-get remove elasticsearch
sudo apt-get --purge autoremove elasticsearch
sudo dpkg --remove elasticsearch
sudo dpkg --purge elasticsearch
sudo dpkg --purge --force-all elasticsearch
#sudo rm -rf /var/lib/elasticsearch/ # this will remove the indexed data
# https://stackoverflow.com/a/33303945/5305401
sudo rm -rf /etc/elasticsearch
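
After the cleanup, the Elasticsearch deb could be installed again. A minimal sketch of that step, assuming the 7.17.0 package sits in the current directory (the file name is illustrative):

sudo dpkg -i elasticsearch-7.17.0-amd64.deb   # install the downloaded package
sudo systemctl daemon-reload                  # pick up the (re)created unit file
sudo systemctl enable --now elasticsearch     # start and enable the service
# quick sanity check of the running version (add -u/--cacert if security/HTTPS is enabled)
curl -s localhost:9200 | grep number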

Added the Enterprise Search component while upgrading.

Issues faced while adding it:

  • HTTPS issue

    After some hit and trial, the combination of settings below worked. Note: when Elasticsearch is served over HTTPS, leaving the SSL config unset and only setting verify SSL to false did not work; the Enterprise Search server was not able to detect the Elasticsearch service.

    # ref: https://www.elastic.co/guide/en/enterprise-search/current/configuration.html
    ## ================= Elastic Enterprise Search Configuration ==================
    #
    # NOTE: Elastic Enterprise Search comes with reasonable defaults.
    #       Before adjusting the configuration, make sure you understand what you
    #       are trying to accomplish and the consequences.
    #
    # NOTE: For passwords, the use of environment variables is encouraged
    #       to keep values from being written to disk, e.g.
    #       elasticsearch.password: ${ELASTICSEARCH_PASSWORD:changeme}
    #
    # ---------------------------------- Secrets ----------------------------------
    #
    # Encryption keys to protect your application secrets. This field is required.
    #
    # inject
    secret_management.encryption_keys: ['a532a5da3b88e5cc19ff8a23b82a590b21cfc4aeb1fdc1c6efe574a94f20607e']
    #
    # ------------------------------- Elasticsearch -------------------------------
    #
    # Enterprise Search needs one-time permission to alter Elasticsearch settings.
    # Ensure the Elasticsearch settings are correct, then set the following to
    # true. Or, adjust Elasticsearch's config/elasticsearch.yml instead.
    # See README.md for more details.
    #
    allow_es_settings_modification: true
    #
    # Elasticsearch full cluster URL:
    #
    #inject
    # elasticsearch.host: https://127.0.0.1:9200
    elasticsearch.host: "https://{{ elasticsearch_logs_server }}:{{ elasticsearch_logs_port }}"
    #
    # Elasticsearch credentials:
    #
    ent_search.auth.source: elasticsearch-native
    # ent_search.auth.source: standard
    elasticsearch.username: "{{ elasticsearch_logs_admin_username }}"
    # inject
    elasticsearch.password: "{{ elasticsearch_logs_admin_password }}"
    #
    # Elasticsearch custom HTTP headers to add to each request:
    #
    #elasticsearch.headers:
    #  X-My-Header: Contents of the header
    #
    #elasticsearch.requestHeadersWhitelist: [ es-security-runas-user, authorization ]
    # Elasticsearch SSL settings:
    #
    elasticsearch.ssl.enabled: true
    #elasticsearch.ssl.certificate:
    elasticsearch.ssl.certificate_authority: "{{ enterprise_search_ca_path }}"
    #elasticsearch.ssl.key:
    #elasticsearch.ssl.key_passphrase:
    elasticsearch.ssl.verify: true
    #
    # Elasticsearch startup retry:
    #
    #elasticsearch.startup_retry.enabled: true
    #elasticsearch.startup_retry.interval: 5 # seconds
    #elasticsearch.startup_retry.fail_after: 200 # seconds
    #
    # ------------------------------- Hosting & Network ---------------------------
    #
    # Define the exposed URL at which users will reach Enterprise Search.
    # Defaults to localhost:3002 for testing purposes.
    # Most cases will use one of:
    #
    # * An IP: http://255.255.255.255
    # * A FQDN: http://example.com
    # * Shortname defined via /etc/hosts: http://ent-search.search
    #
    # inject
    # ent_search.external_url: http://search-1.test.kfupm.edu.sa
    # ent_search.external_url: https://search-api.test.kfupm.edu.sa
    #
    # Web application listen_host and listen_port.
    # Your application will run on this host and port.
    #
    # * ent_search.listen_host: Must be a valid IPv4 or IPv6 address.
    # * ent_search.listen_port: Must be a valid port number (1-65535).
    #
    # inject
    ent_search.listen_host: 127.0.0.1
    ent_search.listen_port: "{{ enterprise_search_port }}"
    
    # ------------------------------ Authentication -------------------------------
    #
    # The origin of authenticated Enterprise Search users.
    # Options are standard, elasticsearch-native, and elasticsearch-saml.
    #
    # Docs: https://www.elastic.co/guide/en/workplace-search/current/workplace-search-security.html
    #
    # * standard: Users are created within the Enterprise Search dashboard.
    # * elasticsearch-native: Users are managed via the Elasticsearch native realm.
    # * elasticsearch-saml: Users are managed via the Elasticsearch SAML realm.
    #
    #ent_search.auth.source: standard
    #
    # (SAML only) Name of the realm within the Elasticsearch realm chain.
    #
    #ent_search.auth.name:
    #
    # Adds a message to the login screen. Useful for displaying information about maintenance windows,
    # links to corporate sign up pages, etc. This field supports Markdown.
    #
    #ent_search.login_assistance_message:
    #
    # ---------------------------------- Limits -----------------------------------
    #
    # Configurable limits for Enterprise Search.
    # NOTE: Overriding the default limits can impact performance negatively.
    #       Also, changing a limit here does not actually guarantee that
    #       Enterprise Search will work as expected as related Elasticsearch limits
    #       can be exceeded.
    #
    #### Workplace Search
    #
    # Configure the maximum allowed document size for Custom API Sources.
    #
    #workplace_search.custom_api_source.document_size.limit: 100kb
    #
    # Configure how many fields a Custom API Source can have.
    # NOTE: The Elasticsearch/Lucene setting `indices.query.bool.max_clause_count`
    # might also need to be adjusted if "Max clause count exceeded" errors start
    # occurring. See more here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-settings.html
    #
    #workplace_search.custom_api_source.total_fields.limit: 64
    #
    #### App Search
    #
    # Configure the maximum allowed document size.
    #
    #app_search.engine.document_size.limit: 100kb
    #
    # Configure how many fields an engine can have.
    # NOTE: The Elasticsearch/Lucene setting `indices.query.bool.max_clause_count`
    # might also need to be adjusted if "Max clause count exceeded" errors start
    # occurring. See more here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-settings.html
    #
    #app_search.engine.total_fields.limit: 64
    #
    # Configure how many source engines a meta engine can have.
    #
    #app_search.engine.source_engines_per_meta_engine.limit: 15
    #
    # Configure how many facet values can be returned by a search.
    #
    #app_search.engine.total_facet_values_returned.limit: 250
    #
    # Configure how big full-text queries are allowed.
    # NOTE: The Elasticsearch/Lucene setting `indices.query.bool.max_clause_count`
    # might also need to be adjusted if "Max clause count exceeded" errors start
    # occurring. See more here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-settings.html
    #
    #app_search.engine.query.limit: 128
    #
    # Configure total number of synonym sets an engine can have.
    #
    #app_search.engine.synonyms.sets.limit: 256
    #
    # Configure total number of terms a synonym set can have.
    #
    #app_search.engine.synonyms.terms_per_set.limit: 32
    #
    # Configure how many analytics tags can be associated with a single query or clickthrough.
    #
    #app_search.engine.analytics.total_tags.limit: 16
    #
    # ---------------------------------- Workers ----------------------------------
    #
    # Configure the number of worker threads.
    #
    #worker.threads: 4
    #
    # ----------------------------------- APIs ------------------------------------
    #
    # Set to true to hide product version information from API responses.
    #
    #hide_version_info: false
    #
    # ---------------------------------- Mailer -----------------------------------
    #
    # Connect Enterprise Search to a mailer.
    # Docs: https://www.elastic.co/guide/en/workplace-search/current/workplace-search-smtp-mailer.html
    #
    #email.account.enabled: false
    #email.account.smtp.auth: plain
    #email.account.smtp.starttls.enable: false
    #email.account.smtp.host: 127.0.0.1
    #email.account.smtp.port: 25
    #email.account.smtp.user:
    #email.account.smtp.password:
    #email.account.email_defaults.from:
    #
    # ---------------------------------- Logging ----------------------------------
    #
    # Choose your log export path.
    #
    #log_directory: log
    #
    # Log level can be: debug, info, warn, error, fatal, or unknown.
    #
    #log_level: info
    #
    # Log format can be: default, json
    #
    #log_format: default
    #
    # Choose your Filebeat logs export path.
    #
    #filebeat_log_directory: log
    #
    # Use Index Lifecycle Management (ILM) to manage analytics and API logs
    # retention.
    #
    # auto: Use ILM when supported by the underlying Elasticsearch cluster
    # true: Use ILM (requires ILM support in the underlying Elasticsearch cluster)
    # false: Don't use ILM (analytics and API logs will grow unconstrained)
    #
    #ilm.enabled: auto
    #
    # Enable logging app logs to stdout (enabled by default).
    #
    #enable_stdout_app_logging: true
    #
    # The number of files to keep on disk when rotating logs. When set to 0, no
    # rotation will take place.
    #
    #log_rotation.keep_files: 7
    #
    # The maximum file size in bytes before rotating the log file. If
    # log_rotation.keep_files is set to 0, no rotation will take place and there
    # will be no size limit for the singular log file.
    #
    #log_rotation.rotate_every_bytes: 1048576 # 1 MiB
    #
    # ---------------------------------- TLS/SSL ----------------------------------
    #
    # Configure TLS/SSL encryption.
    #
    #ent_search.ssl.enabled: false
    #ent_search.ssl.keystore.path:
    #ent_search.ssl.keystore.password:
    #ent_search.ssl.keystore.key_password:
    #ent_search.ssl.redirect_http_from_port:
    #
    # ---------------------------------- Session ----------------------------------
    #
    # Set a session key to persist user sessions through process restarts.
    #
    #secret_session_key:
    #
    # --------------------------------- Telemetry ---------------------------------
    #
    # Reporting your basic feature usage statistics helps us improve your user
    # experience. Your data is never shared with anyone.
    #
    # Set to false to disable telemetry capabilities entirely. You can alternatively
    # opt out through the Settings page.
    #
    #telemetry.enabled: true
    #
    # If false, collection of telemetry data is disabled; however, it can be
    # enabled via the Settings page if telemetry.allow_changing_opt_in_status is
    # true.
    #
    #telemetry.opt_in: true
    #
    # If true, users are able to change the telemetry setting at a later time
    # through the Settings page. If false, the value of telemetry.opt_in determines
    # whether to send telemetry data or not.
    #
    #telemetry.allow_changing_opt_in_status: true
    #
    # ----------------------------- Diagnostics report ----------------------------
    #
    # Path where diagnostic reports will be generated.
    #
    #diagnostic_report_directory: diagnostics
    kibana.external_url: http://localhost:5601
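
    A quick way to confirm the SSL pieces line up before starting Enterprise Search is to query Elasticsearch with the same CA and credentials the config above points at. A sketch only; host, port, user, and CA path stand in for the Ansible variables:

    # should return the cluster banner JSON; a certificate error means the CA path is
    # wrong, a 401 means the credentials are wrong
    curl --cacert /path/to/ca.crt -u "admin_user:admin_password" \
         "https://elasticsearch.example.internal:9200/"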
    
    
  • Migration / existing indices issue

    The service was not starting because of a migration/user problem. Error:

    Error: User 5e46f2c013cdb294490309e3 has no external identities at /Users/spetri/Downloads/enterprise-search-7.10.1/lib/war/shared_togo/db/migrate/20200512150416_add_elasticsearch_username_and_auth_source_to_user.class:26:in `block in up'
    

    Solved this problem by deleting the app-search and ent-search indices, following the Elastic reference.

    DELETE /.app-search-* # delete App Search indices
    DELETE /.ent-search-* # delete Enterprise Search indices
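
    The same cleanup can be done with curl where Kibana Dev Tools is not available. A sketch only; credentials, the CA path, and the expand_wildcards flag for hidden indices are assumptions:

    # list what is about to be deleted, then delete it (this wipes Enterprise Search state)
    curl --cacert /path/to/ca.crt -u elastic "https://localhost:9200/_cat/indices/.app-search-*,.ent-search-*?v&expand_wildcards=all"
    curl --cacert /path/to/ca.crt -u elastic -X DELETE "https://localhost:9200/.app-search-*,.ent-search-*?expand_wildcards=all"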
    
  • Starting [[enterprise_search–20220207-144148.md][Enterprise Search]]

    According to the documentation, Enterprise Search should be started as

    ENT_SEARCH_DEFAULT_PASSWORD=passwordexample bin/enterprise-search
    

    But this requires a terminal. With Ansible, using the shell module and appending "&" did not work:

    - name: Restart enterprise server
      shell: ENT_SEARCH_DEFAULT_PASSWORD=passwordexample bin/enterprise-search &
    

    The solution was using nohup; this discussion helped in finding it.

    - name: Restart enterprise server
      shell: cd /tmp; nohup /usr/share/enterprise-search/bin/enterprise-search > nohup.out 2>&1 &
      become: yes
      become_user: enterprise-search
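
    Since nohup returns immediately, it helps to poll the service afterwards until it answers. A small sketch; the port is the enterprise_search_port variable from the config above (3002 by default):

    # wait until the Enterprise Search HTTP port responds, or give up after ~2 minutes
    for i in $(seq 1 24); do
        curl -s -o /dev/null "http://127.0.0.1:3002" && break
        sleep 5
    done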
    

Issues faced in production

/etc/elasticsearch/log4j2.properties file was not getting created in production

Solution: the Ansible role (v7.17.0) uses the es_config_log4j2 variable to enable adding log4j2 properties. The default value of this variable is "", which is why the file was not getting copied in production. However, the same role worked in testing; no idea how.

 es_config_log4j2: "test/integration/files/custom_config/log4j2.properties"
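
A quick check after running the role, to confirm the file actually landed on the production host (path per the standard deb layout):

# complain if the log4j2 config is still missing after the role has run
test -f /etc/elasticsearch/log4j2.properties \
    && echo "log4j2.properties present" \
    || echo "log4j2.properties missing - check es_config_log4j2"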

APM indices issue

After completing the upgrade in production, the APM server was not able to index data into Elasticsearch. Warning/Error:

{"type":"illegal_argument_exception","reason":"no write index is defined for alias [apm-7.5.1-span]. The write index may be explicitly disabled using is_write_index=false or the alias points to multiple indices without one being designated as a write index"}

To solve this, raised a ticket/case with Elastic support (ticket url). The solution they suggested:

  1. GET _cat/indices/apm-7.5.1-span* to get the latest backing index.
  2. Let’s try just setting the latest to be the write alias:

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "apm-7.5.1-span-000015",
        "alias": "apm-7.5.1-span",
        "is_write_index": true
      }
    }
  ]
}
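
The same two steps over curl, in case Kibana Dev Tools is not reachable. A sketch; the backing index number, credentials, and CA path are placeholders:

# 1. find the latest backing index behind the alias
curl --cacert /path/to/ca.crt -u elastic "https://localhost:9200/_cat/indices/apm-7.5.1-span*?v&s=index"
# 2. mark that index as the write index for the alias
curl --cacert /path/to/ca.crt -u elastic -X POST -H 'Content-Type: application/json' \
     "https://localhost:9200/_aliases" -d '
{
  "actions": [
    { "add": { "index": "apm-7.5.1-span-000015", "alias": "apm-7.5.1-span", "is_write_index": true } }
  ]
}'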

  • Related info

    To get aliases info:

    curl 'localhost:9200/_cat/aliases?v'
    

    from here

  • (status=403): "type":"cluster_block_exception","reason":"index [apm-7.5.1-span-000009] blocked by: [FORBIDDEN/8/index write (api)];"

    Solution

    PUT /apm-7.5.1-metric-000017/_settings
    { "index": { "blocks": { "write": "false" } } }
    

    The following can be used to find the block status of all the indices.

    GET /*/_settings?filter_path=*.settings.index.blocks
    

    Comment from Elastic support: "Which can be helpful if you're needing to track down the rest. Legitimate write blocks will exist for rolled over indices - so apm-7.5.1-span-000001 will have a block, while the current iteration should be writeable, so probably just be aware that some of the older indices are ok to be blocked when/if you run that request."

    Further suggestions from Elastic support:

    Normally, you can correct these surgically - removing the write blocks from only the indices that are affected. I think for this though, you can probably just do something like:

    
    PUT /apm-7.5.1-*/_settings
    {
      "blocks.write": null
    }
    

    This way we just clear them all - ILM will still only be writing to the current “is_write_index” so it shouldn’t be a problem.
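
    After clearing the blocks it is worth re-checking that nothing under apm-7.5.1-* is still write-blocked. A sketch with curl; auth and CA placeholders as before:

    # an empty response ({}) means no blocks remain on the matched indices
    curl --cacert /path/to/ca.crt -u elastic \
         "https://localhost:9200/apm-7.5.1-*/_settings?filter_path=*.settings.index.blocks"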

From 7.17.0 to 8.1.2

"@jsermer We've discussed it internally and have decided to officially stop development of the ansible-elasticsearch and ansible-beats module once 8.X is released." (from here) The Ansible role is no longer maintained by Elastic.

I had to roll back the changes to 7.17.2 (the latest version in the 7 series as of <2022-04-11 Mon>) again.

I could complete the upgrade even to 7.17.2, but rolled back to 7.17.0. Maybe later.
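
Until the move to 8.x happens, it may be worth holding the stack at the current version so a routine apt upgrade does not pull in 8.x packages by accident. A sketch using apt holds; the package list is assumed to match what is installed:

# prevent apt from upgrading the Elastic packages past the pinned versions
sudo apt-mark hold elasticsearch kibana apm-server logstash filebeat
# later, when ready to attempt 8.x again:
# sudo apt-mark unhold elasticsearch kibana apm-server logstash filebeat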