Quick yum-based installation of orchestrator for high availability
[root@mgr1 ~]# curl -s https://packagecloud.io/install/repositories/github/orchestrator/script.rpm.sh | sudo bash
Detected operating system as centos/7.
Checking for curl...
Detected curl...
Downloading repository file: https://packagecloud.io/install/repositories/github/orchestrator/config_file.repo?os=centos&dist=7&source=script
done.
Installing pygpgme to verify GPG signatures...
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.dal.nexril.net
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.tuna.tsinghua.edu.cn
base                                     | 3.6 kB  00:00:00
extras                                   | 3.4 kB  00:00:00
github_orchestrator-source/signature     | 819 B   00:00:00
Retrieving key from https://packagecloud.io/github/orchestrator/gpgkey
Importing GPG key 0x7AC40831:
 Userid     : "https://packagecloud.io/github/orchestrator (https://packagecloud.io/docs#gpg_signing)"
 Fingerprint: 1580 fbdf 6d61 7952 e2e5 e859 f3e4 3403 7ac4 0831
 From       : https://packagecloud.io/github/orchestrator/gpgkey
github_orchestrator-source/signature     | 951 B   00:00:00 !!!
proxysql_repo                            | 2.9 kB  00:00:00
updates                                  | 3.4 kB  00:00:00
(1/3): extras/7/x86_64/primary_db        | 215 kB  00:00:01
(2/3): proxysql_repo/7/primary_db        | 7.5 kB  00:00:02
(3/3): updates/7/x86_64/primary_db       | 7.4 MB  00:00:19
github_orchestrator-source/primary       | 175 B   00:00:01
Package pygpgme-0.3-9.el7.x86_64 already installed and latest version
Nothing to do
Installing yum-utils...
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dal.nexril.net
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-50.el7 will be installed
--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-50.el7.noarch
--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-50.el7.noarch
--> Running transaction check
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Processing Dependency: python-chardet for package: python-kitchen-1.1.1-5.el7.noarch
--> Running transaction check
---> Package python-chardet.noarch 0:2.2.1-1.el7_1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch        Version              Repository        Size
================================================================================
Installing:
 yum-utils             noarch      1.1.31-50.el7        base             121 k
Installing for dependencies:
 libxml2-python        x86_64      2.9.1-6.el7_2.3      base             247 k
 python-chardet        noarch      2.2.1-1.el7_1        base             227 k
 python-kitchen        noarch      1.1.1-5.el7          base             267 k

Transaction Summary
================================================================================
Install  1 Package (+3 Dependent packages)

Total download size: 861 k
Installed size: 4.3 M
Downloading packages:
(1/4): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm     | 247 kB  00:00:01
(2/4): python-chardet-2.2.1-1.el7_1.noarch.rpm       | 227 kB  00:00:02
(3/4): yum-utils-1.1.31-50.el7.noarch.rpm            | 121 kB  00:00:03
(4/4): python-kitchen-1.1.1-5.el7.noarch.rpm         | 267 kB  00:00:03
--------------------------------------------------------------------------------
Total                                      251 kB/s | 861 kB  00:00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-chardet-2.2.1-1.el7_1.noarch        1/4
  Installing : python-kitchen-1.1.1-5.el7.noarch          2/4
  Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64      3/4
  Installing : yum-utils-1.1.31-50.el7.noarch             4/4
  Verifying  : libxml2-python-2.9.1-6.el7_2.3.x86_64      1/4
  Verifying  : python-kitchen-1.1.1-5.el7.noarch          2/4
  Verifying  : yum-utils-1.1.31-50.el7.noarch             3/4
  Verifying  : python-chardet-2.2.1-1.el7_1.noarch        4/4

Installed:
  yum-utils.noarch 0:1.1.31-50.el7

Dependency Installed:
  libxml2-python.x86_64 0:2.9.1-6.el7_2.3   python-chardet.noarch 0:2.2.1-1.el7_1   python-kitchen.noarch 0:1.1.1-5.el7

Complete!
Generating yum cache for github_orchestrator...
Importing GPG key 0x7AC40831:
 Userid     : "https://packagecloud.io/github/orchestrator (https://packagecloud.io/docs#gpg_signing) "
 Fingerprint: 1580 fbdf 6d61 7952 e2e5 e859 f3e4 3403 7ac4 0831
 From       : https://packagecloud.io/github/orchestrator/gpgkey
Generating yum cache for github_orchestrator-source...

The repository is setup! You can now install packages.
[root@mgr1 ~]# yum install orchestrator*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dal.nexril.net
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package orchestrator.x86_64 1:3.1.1-1 will be installed
--> Processing Dependency: jq >= 1.5 for package: 1:orchestrator-3.1.1-1.x86_64
---> Package orchestrator-cli.x86_64 1:3.1.1-1 will be installed
--> Processing Dependency: jq >= 1.5 for package: 1:orchestrator-cli-3.1.1-1.x86_64
---> Package orchestrator-client.x86_64 1:3.1.1-1 will be installed
--> Processing Dependency: jq >= 1.5 for package: 1:orchestrator-client-3.1.1-1.x86_64
--> Finished Dependency Resolution
Error: Package: 1:orchestrator-cli-3.1.1-1.x86_64 (github_orchestrator)
           Requires: jq >= 1.5
Error: Package: 1:orchestrator-3.1.1-1.x86_64 (github_orchestrator)
           Requires: jq >= 1.5
Error: Package: 1:orchestrator-client-3.1.1-1.x86_64 (github_orchestrator)
           Requires: jq >= 1.5
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
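The failure above is expected on a stock CentOS 7 host: the orchestrator packages require jq >= 1.5, which lives in EPEL rather than in the base repositories. Enabling EPEL first, as done in the next step, lets yum resolve the dependency automatically; the whole sequence condenses to the sketch below (assuming the packagecloud repository has already been set up as above):

# Enable EPEL so the jq >= 1.5 dependency is resolvable, then install all three packages.
yum -y install epel-release
yum -y install orchestrator orchestrator-cli orchestrator-client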
[root@mgr1 ~]# yum -y install epel-release
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dal.nexril.net
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch        Version         Repository            Size
================================================================================
Installing:
 epel-release          noarch      7-11            extras                15 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 15 k
Installed size: 24 k
Downloading packages:
epel-release-7-11.noarch.rpm                         |  15 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-11.noarch     1/1
  Verifying  : epel-release-7-11.noarch     1/1

Installed:
  epel-release.noarch 0:7-11

Complete!
[root@mgr1 ~]# yum install orchestrator*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink                     | 6.3 kB  00:00:00
 * base: mirror.dal.nexril.net
 * epel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.tuna.tsinghua.edu.cn
epel                                     | 5.3 kB  00:00:00
(1/3): epel/x86_64/group_gz              |  88 kB  00:00:01
(2/3): epel/x86_64/updateinfo            | 993 kB  00:00:06
(3/3): epel/x86_64/primary_db            | 6.8 MB  00:00:40
Resolving Dependencies
--> Running transaction check
---> Package orchestrator.x86_64 1:3.1.1-1 will be installed
--> Processing Dependency: jq >= 1.5 for package: 1:orchestrator-3.1.1-1.x86_64
---> Package orchestrator-cli.x86_64 1:3.1.1-1 will be installed
---> Package orchestrator-client.x86_64 1:3.1.1-1 will be installed
--> Running transaction check
---> Package jq.x86_64 0:1.5-1.el7 will be installed
--> Processing Dependency: libonig.so.2()(64bit) for package: jq-1.5-1.el7.x86_64
--> Running transaction check
---> Package oniguruma.x86_64 0:5.9.5-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                Arch      Version       Repository              Size
================================================================================
Installing:
 orchestrator           x86_64    1:3.1.1-1     github_orchestrator    9.8 M
 orchestrator-cli       x86_64    1:3.1.1-1     github_orchestrator    9.4 M
 orchestrator-client    x86_64    1:3.1.1-1     github_orchestrator     15 k
Installing for dependencies:
 jq                     x86_64    1.5-1.el7     epel                   153 k
 oniguruma              x86_64    5.9.5-3.el7   epel                   129 k

Transaction Summary
================================================================================
Install  3 Packages (+2 Dependent packages)

Total download size: 19 M
Installed size: 40 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/jq-1.5-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for jq-1.5-1.el7.x86_64.rpm is not installed
(1/5): jq-1.5-1.el7.x86_64.rpm                       | 153 kB  00:00:00
(2/5): oniguruma-5.9.5-3.el7.x86_64.rpm              | 129 kB  00:00:06
orchestrator-cli-3.1.1-1.x86_6 FAILED
https://packagecloud.io/github/orchestrator/el/7/x86_64/orchestrator-cli-3.1.1-1.x86_64.rpm: [Errno 12] Timeout on https://d28dx6y1hfq314.cloudfront.net/1358/4059/el/7/package_files/505827.rpm?t=1565775074_357146002ed4a0c21fdc1791b6355bf780e8d974: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
(3/5): orchestrator-client-3.1.1-1.x86_64.rpm        |  15 kB  00:00:01
orchestrator-3.1.1-1.x86_64.rp FAILED
https://packagecloud.io/github/orchestrator/el/7/x86_64/orchestrator-3.1.1-1.x86_64.rpm: [Errno 12] Timeout on https://d28dx6y1hfq314.cloudfront.net/1358/4059/el/7/package_files/505831.rpm?t=1565775074_9cada29cb9634db102677cf02fd559fb1611af06: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
(4/5): orchestrator-cli-3.1.1-1.x86_64.rpm           | 9.4 MB  00:00:21
(5/5): orchestrator-3.1.1-1.x86_64.rpm               | 9.8 MB  00:00:25
--------------------------------------------------------------------------------
Total                                      174 kB/s |  19 MB  00:01:54
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oniguruma-5.9.5-3.el7.x86_64              1/5
  Installing : jq-1.5-1.el7.x86_64                       2/5
  Installing : 1:orchestrator-client-3.1.1-1.x86_64      3/5
  Installing : 1:orchestrator-3.1.1-1.x86_64             4/5
  Installing : 1:orchestrator-cli-3.1.1-1.x86_64         5/5
  Verifying  : 1:orchestrator-client-3.1.1-1.x86_64      1/5
  Verifying  : 1:orchestrator-3.1.1-1.x86_64             2/5
  Verifying  : oniguruma-5.9.5-3.el7.x86_64              3/5
  Verifying  : jq-1.5-1.el7.x86_64                       4/5
  Verifying  : 1:orchestrator-cli-3.1.1-1.x86_64         5/5

Installed:
  orchestrator.x86_64 1:3.1.1-1   orchestrator-cli.x86_64 1:3.1.1-1   orchestrator-client.x86_64 1:3.1.1-1

Dependency Installed:
  jq.x86_64 0:1.5-1.el7   oniguruma.x86_64 0:5.9.5-3.el7

Complete!
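With the packages installed, it is worth confirming what the RPMs actually laid down before configuring anything: the server binary and its web resources end up under /usr/local/orchestrator, as the directory listing further below shows. A minimal verification sketch (package names as installed above; the exact file list can vary between orchestrator releases):

# Confirm the package versions and see where the files were placed.
rpm -q orchestrator orchestrator-cli orchestrator-client jq
rpm -ql orchestrator | head -n 20
rpm -ql orchestrator-client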
[root@mgr1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.13 mgr
"/etc/hosts" 8L, 232C written
[root@mgr1 ~]# cd /etc/
[root@mgr1 etc]# rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
Transferring orchestrator.conf.json...
  100%       4 KB       4 KB/sec    00:00:01       0 Errors
[root@mgr1 etc]# cat orchestrator.conf.json
{
  "Debug": true,
  "EnableSyslog": false,
  "ListenAddress": ":3000",
  "MySQLTopologyUser": "orchestrator",
  "MySQLTopologyPassword": "123456",
  "MySQLTopologyCredentialsConfigFile": "",
  "MySQLTopologySSLPrivateKeyFile": "",
  "MySQLTopologySSLCertFile": "",
  "MySQLTopologySSLCAFile": "",
  "MySQLTopologySSLSkipVerify": true,
  "MySQLTopologyUseMutualTLS": false,
  "BackendDB": "sqlite",
  "SQLite3DataFile": "/usr/local/orchestrator/orchestrator.sqlite3",
  "MySQLConnectTimeoutSeconds": 1,
  "DefaultInstancePort": 3306,
  "DiscoverByShowSlaveHosts": true,
  "InstancePollSeconds": 5,
  "DiscoveryIgnoreReplicaHostnameFilters": [
    "a_host_i_want_to_ignore[.]example[.]com",
    ".*[.]ignore_all_hosts_from_this_domain[.]example[.]com"
  ],
  "UnseenInstanceForgetHours": 240,
  "SnapshotTopologiesIntervalHours": 0,
  "InstanceBulkOperationsWaitTimeoutSeconds": 10,
  "HostnameResolveMethod": "default",
  "MySQLHostnameResolveMethod": "@@hostname",
  "SkipBinlogServerUnresolveCheck": true,
  "ExpiryHostnameResolvesMinutes": 60,
  "RejectHostnameResolvePattern": "",
  "ReasonableReplicationLagSeconds": 10,
  "ProblemIgnoreHostnameFilters": [],
  "VerifyReplicationFilters": false,
  "ReasonableMaintenanceReplicationLagSeconds": 20,
  "CandidateInstanceExpireMinutes": 60,
  "AuditLogFile": "",
  "AuditToSyslog": false,
  "RemoveTextFromHostnameDisplay": ".mydomain.com:3306",
  "ReadOnly": false,
  "AuthenticationMethod": "",
  "HTTPAuthUser": "",
  "HTTPAuthPassword": "",
  "AuthUserHeader": "",
  "PowerAuthUsers": [
    "*"
  ],
  "ClusterNameToAlias": {
    "127.0.0.1": "test suite"
  },
  "SlaveLagQuery": "",
  "DetectClusterAliasQuery": "SELECT SUBSTRING_INDEX(@@hostname, '.', 1)",
  "DetectClusterDomainQuery": "",
  "DetectInstanceAliasQuery": "",
  "DetectPromotionRuleQuery": "",
  "DataCenterPattern": "[.]([^.]+)[.][^.]+[.]mydomain[.]com",
  "PhysicalEnvironmentPattern": "[.]([^.]+[.][^.]+)[.]mydomain[.]com",
  "PromotionIgnoreHostnameFilters": [],
  "DetectSemiSyncEnforcedQuery": "",
  "ServeAgentsHttp": false,
  "AgentsServerPort": ":3001",
  "AgentsUseSSL": false,
  "AgentsUseMutualTLS": false,
  "AgentSSLSkipVerify": false,
  "AgentSSLPrivateKeyFile": "",
  "AgentSSLCertFile": "",
  "AgentSSLCAFile": "",
  "AgentSSLValidOUs": [],
  "UseSSL": false,
  "UseMutualTLS": false,
  "SSLSkipVerify": false,
  "SSLPrivateKeyFile": "",
  "SSLCertFile": "",
  "SSLCAFile": "",
  "SSLValidOUs": [],
  "URLPrefix": "",
  "StatusEndpoint": "/api/status",
  "StatusSimpleHealth": true,
  "StatusOUVerify": false,
  "AgentPollMinutes": 60,
  "UnseenAgentForgetHours": 6,
  "StaleSeedFailMinutes": 60,
  "SeedAcceptableBytesDiff": 8192,
  "PseudoGTIDPattern": "",
  "PseudoGTIDPatternIsFixedSubstring": false,
  "PseudoGTIDMonotonicHint": "asc:",
  "DetectPseudoGTIDQuery": "",
  "BinlogEventsChunkSize": 10000,
  "SkipBinlogEventsContaining": [],
  "ReduceReplicationAnalysisCount": true,
  "FailureDetectionPeriodBlockMinutes": 1,
  "RecoveryPeriodBlockSeconds": 0,
  "RecoveryIgnoreHostnameFilters": [],
  "RecoverMasterClusterFilters": [
    "*"
  ],
  "RecoverIntermediateMasterClusterFilters": [
    "*"
  ],
  "OnFailureDetectionProcesses": [
    "echo 'Detected {failureType} on {failureCluster}. Affected replicas: {countSlaves}' >> /tmp/recovery.log"
  ],
  "PreFailoverProcesses": [
    "echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
  ],
  "PostFailoverProcesses": [
    "echo '(for all types) Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log",
    "/usr/local/bin/orch_hook.sh {failureType} {failureClusterAlias} {failedHost} {successorHost} >> /tmp/orch.log"
  ],
  "PostUnsuccessfulFailoverProcesses": [],
  "PostMasterFailoverProcesses": [
    "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Promoted: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ],
  "PostIntermediateMasterFailoverProcesses": [
    "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ],
  "CoMasterRecoveryMustPromoteOtherCoMaster": true,
  "DetachLostSlavesAfterMasterFailover": true,
  "ApplyMySQLPromotionAfterMasterFailover": true,
  "PreventCrossDataCenterMasterFailover": false,
  "PreventCrossRegionMasterFailover": false,
  "MasterFailoverDetachSlaveMasterHost": false,
  "MasterFailoverLostInstancesDowntimeMinutes": 0,
  "PostponeSlaveRecoveryOnLagMinutes": 0,
  "OSCIgnoreHostnameFilters": [],
  "GraphiteAddr": "",
  "GraphitePath": "",
  "GraphiteConvertHostnameDotsToUnderscores": true
}
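The file points orchestrator at the topology with MySQLTopologyUser/MySQLTopologyPassword, so that account has to exist on every MySQL instance being watched (the es1/es2/es3 servers discovered later). A minimal sketch of creating it on the master; the privilege list follows the orchestrator documentation, while the '%' host and the password taken from the config above are assumptions to adapt for production:

# Create the monitoring account referenced by MySQLTopologyUser / MySQLTopologyPassword.
mysql -uroot -p -e "CREATE USER 'orchestrator'@'%' IDENTIFIED BY '123456';"
mysql -uroot -p -e "GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'%';"
mysql -uroot -p -e "GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'%';"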
[root@mgr1 etc]# /etc/init.d/
elasticsearch  netconsole  network  proxysql
[root@mgr1 etc]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:6032            0.0.0.0:*               LISTEN      3326/proxysql
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      3326/proxysql
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3065/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3295/master
tcp6       0      0 :::22                   :::*                    LISTEN      3065/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      3295/master
[root@mgr1 etc]# cd /usr/local/orchestrator/
[root@mgr1 orchestrator]# ll
total 19436
-rwxr-xr-x. 1 root root 19884352 Aug  4 14:47 orchestrator
-rw-r--r--. 1 root root     5465 Aug  4 14:45 orchestrator-sample.conf.json
-rw-r--r--. 1 root root     4668 Aug  4 14:45 orchestrator-sample-sqlite.conf.json
drwxr-xr-x. 7 root root       82 Aug 14 17:28 resources
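Below, the server is started in the foreground for a first test. orchestrator includes /etc/orchestrator.conf.json in its default config search path, which is why the file uploaded to /etc above is picked up without any extra flags. Once the foreground run looks healthy, a minimal sketch for detaching it from the terminal (nohup here is just an illustration, not the packaged service setup, and the log path is arbitrary):

cd /usr/local/orchestrator
# Run the HTTP API/UI in the background, with an explicit config path and a log file.
nohup ./orchestrator --config=/etc/orchestrator.conf.json http > /tmp/orchestrator.log 2>&1 &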
[root@mgr1 orchestrator]# /usr/local/orchestrator/orchestrator http
2019-08-14 17:33:12 DEBUG Connected to orchestrator backend: sqlite on /usr/local/orchestrator/orchestrator.sqlite3
2019-08-14 17:33:12 DEBUG Initializing orchestrator
2019-08-14 17:33:12 DEBUG Migrating database schema
2019-08-14 17:33:12 DEBUG Migrated database schema to version [3.1.1]
2019-08-14 17:33:12 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
2019-08-14 17:33:12 INFO Starting Discovery
2019-08-14 17:33:12 INFO Registering endpoints
2019-08-14 17:33:12 INFO continuous discovery: setting up
2019-08-14 17:33:12 INFO continuous discovery: starting
2019-08-14 17:33:12 INFO Starting HTTP listener on :3000
2019-08-14 17:33:12 DEBUG Queue.startMonitoring(DEFAULT)
2019-08-14 17:33:13 INFO Not elected as active node; active node: ; polling
2019-08-14 17:33:14 INFO Not elected as active node; active node: ; polling
2019-08-14 17:33:15 INFO Not elected as active node; active node: ; polling
2019-08-14 17:33:16 INFO Not elected as active node; active node: ; polling
2019-08-14 17:33:17 INFO Not elected as active node; active node: ; polling
2019-08-14 17:33:19 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:20 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:21 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:22 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:23 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:24 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:25 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-08-14 17:33:26 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
[martini] Started GET /web/clusters for 192.168.56.1:54290
[martini] Completed 200 OK in 8.284668ms
[martini] Started GET /js/jquery.min.js for 192.168.56.1:54290
[martini] [Static] Serving /js/jquery.min.js
[martini] Started GET /js/common.js for 192.168.56.1:54296
[martini] [Static] Serving /js/common.js
[martini] Completed 200 OK in 2.074743ms
[martini] Started GET /js/jquery.cookie-1.4.1.min.js for 192.168.56.1:54291
[martini] [Static] Serving /js/jquery.cookie-1.4.1.min.js
[martini] Completed 200 OK in 2.944014ms
[martini] Completed 200 OK in 31.260301ms
[martini] Started GET /js/corex.js for 192.168.56.1:54293
[martini] [Static] Serving /js/corex.js
[martini] Started GET /js/corex-jquery.js for 192.168.56.1:54294
[martini] [Static] Serving /js/corex-jquery.js
[martini] Completed 200 OK in 9.197442ms
[martini] Started GET /js/md5.js for 192.168.56.1:54295
[martini] [Static] Serving /js/md5.js
[martini] Completed 200 OK in 2.743661ms
[martini] Started GET /css/orchestrator.css for 192.168.56.1:54291
[martini] [Static] Serving /css/orchestrator.css
[martini] Completed 200 OK in 6.951469ms
[martini] Started GET /bootstrap/css/bootstrap.min.css for 192.168.56.1:54296
[martini] [Static] Serving /bootstrap/css/bootstrap.min.css
[martini] Completed 200 OK in 64.795256ms
[martini] Started GET /js/orchestrator.js for 192.168.56.1:54294
[martini] [Static] Serving /js/orchestrator.js
[martini] Completed 200 OK in 6.589851ms
[martini] Started GET /js/cluster-analysis-shared.js for 192.168.56.1:54291
[martini] [Static] Serving /js/cluster-analysis-shared.js
[martini] Completed 200 OK in 3.819321ms
[martini] Started GET /js/custom.js for 192.168.56.1:54295
[martini] [Static] Serving /js/custom.js
[martini] Completed 200 OK in 10.738787ms
[martini] Started GET /css/custom.css for 192.168.56.1:54290
[martini] [Static] Serving /css/custom.css
[martini] Completed 200 OK in 5.566896ms
[martini] Completed 200 OK in 64.566793ms
[martini] Started GET /images/ajax-loader.gif for 192.168.56.1:54293
[martini] [Static] Serving /images/ajax-loader.gif
[martini] Completed 200 OK in 2.594018ms
[martini] Started GET /js/clusters.js for 192.168.56.1:54291
[martini] [Static] Serving /js/clusters.js
[martini] Completed 200 OK in 2.649993ms
[martini] Started GET /js/instance-problems.js for 192.168.56.1:54290
[martini] [Static] Serving /js/instance-problems.js
[martini] Started GET /js/bootbox.min.js for 192.168.56.1:54294
[martini] [Static] Serving /js/bootbox.min.js
[martini] Completed 200 OK in 2.448253ms
[martini] Started GET /bootstrap/js/bootstrap.min.js for 192.168.56.1:54295
[martini] [Static] Serving /bootstrap/js/bootstrap.min.js
[martini] Completed 200 OK in 4.693863ms
[martini] Completed 200 OK in 15.828717ms
[martini] Started GET /bootstrap/fonts/glyphicons-halflings-regular.woff for 192.168.56.1:54290
[martini] [Static] Serving /bootstrap/fonts/glyphicons-halflings-regular.woff
[martini] Completed 200 OK in 5.705257ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54290
[martini] Completed 200 OK in 2.441157ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54295
[martini] Completed 200 OK in 3.624178ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54290
[martini] Completed 200 OK in 6.241382ms
[martini] Started GET /api/problems for 192.168.56.1:54294
[martini] Started GET /images/orchestrator-logo-32.png for 192.168.56.1:54291
[martini] [Static] Serving /images/orchestrator-logo-32.png
[martini] Completed 200 OK in 1.467163ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54290
[martini] Completed 200 OK in 37.670099ms
[martini] Started GET /assets/ico/favicon.ico for 192.168.56.1:54291
[martini] Completed 404 Not Found in 1.050475ms
[martini] Started GET /api/maintenance for 192.168.56.1:54294
[martini] Completed 200 OK in 3.329422ms
[martini] Completed 200 OK in 57.505224ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Completed 200 OK in 7.896307ms
[martini] Started GET /web/discover for 192.168.56.1:54290
[martini] Completed 200 OK in 5.045515ms
[martini] Started GET /js/discover.js for 192.168.56.1:54290
[martini] [Static] Serving /js/discover.js
[martini] Completed 200 OK in 929.59µs
[martini] Started GET /api/clusters-info for 192.168.56.1:54290
[martini] Completed 200 OK in 6.435473ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54294
[martini] Completed 200 OK in 2.087242ms
[martini] Completed 200 OK in 25.108654ms
[martini] Started GET /api/maintenance for 192.168.56.1:54291
[martini] Completed 200 OK in 3.5533ms
[martini] Started GET /api/discover/es2/3306 for 192.168.56.1:54291
2019-08-14 17:33:46 DEBUG Hostname unresolved yet: es2
2019-08-14 17:33:46 DEBUG Cache hostname resolve es2 as es2
2019-08-14 17:33:46 DEBUG Hostname unresolved yet: es3
2019-08-14 17:33:46 DEBUG Cache hostname resolve es3 as es3
2019-08-14 17:33:46 DEBUG Hostname unresolved yet: es3
2019-08-14 17:33:46 DEBUG Cache hostname resolve es3 as es3
[martini] Completed 200 OK in 57.266738ms
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 4.124483ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 2.95734ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54294
[martini] Completed 200 OK in 4.641513ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 2.410554ms
[martini] Completed 200 OK in 25.635704ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54291
[martini] Started GET /api/maintenance for 192.168.56.1:54290
[martini] Completed 200 OK in 1.96617ms
[martini] Completed 200 OK in 54.309241ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Completed 200 OK in 10.705548ms
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 3.347282ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 2.215592ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54290
[martini] Completed 200 OK in 6.173855ms
[martini] Started GET /api/problems for 192.168.56.1:54294
[martini] Completed 200 OK in 11.36747ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 3.588512ms
[martini] Started GET /api/maintenance for 192.168.56.1:54294
[martini] Completed 200 OK in 2.02093ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54291
[martini] Completed 200 OK in 56.420034ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Completed 200 OK in 10.632568ms
2019-08-14 17:33:52 DEBUG Hostname unresolved yet: es1
2019-08-14 17:33:52 DEBUG Cache hostname resolve es1 as es1
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 5.428885ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 2.176698ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54294
[martini] Completed 200 OK in 1.426532ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Completed 200 OK in 9.368997ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 4.887766ms
[martini] Started GET /api/maintenance for 192.168.56.1:54290
[martini] Completed 200 OK in 6.794784ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54291
[martini] Completed 200 OK in 48.708861ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Completed 200 OK in 7.038165ms
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 4.246108ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 3.952856ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54290
[martini] Completed 200 OK in 6.460721ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 4.537462ms
[martini] Started GET /api/problems for 192.168.56.1:54294
[martini] Completed 200 OK in 8.731843ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54291
[martini] Started GET /api/maintenance for 192.168.56.1:54294
[martini] Completed 200 OK in 1.999778ms
[martini] Completed 200 OK in 86.678785ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Completed 200 OK in 7.292489ms
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 4.109349ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 3.927826ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 5.524413ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54294
[martini] Completed 200 OK in 2.888945ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Started GET /api/replication-analysis for 192.168.56.1:54291
[martini] Completed 200 OK in 34.22444ms
[martini] Completed 200 OK in 37.299671ms
[martini] Started GET /api/maintenance for 192.168.56.1:54290
[martini] Completed 200 OK in 3.180638ms
[martini] Started GET /api/problems for 192.168.56.1:54291
[martini] Completed 200 OK in 10.985714ms
[martini] Started GET /web/clusters/ for 192.168.56.1:54291
[martini] Completed 200 OK in 3.097257ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 2.137603ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54290
[martini] Completed 200 OK in 3.914453ms
[martini] Started GET /api/problems for 192.168.56.1:54294
[martini] Started GET /api/clusters-info for 192.168.56.1:54291
[martini] Completed 200 OK in 1.990911ms
[martini] Completed 200 OK in 24.679852ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54294
[martini] Started GET /api/maintenance for 192.168.56.1:54291
[martini] Completed 200 OK in 1.934805ms
[martini] Completed 200 OK in 45.120922ms
[martini] Started GET /api/problems for 192.168.56.1:54294
[martini] Completed 200 OK in 14.302921ms
2019-08-14 17:34:12 INFO auditType:inject-unseen-masters instance::0 cluster: message:Operations: 0
2019-08-14 17:34:12 INFO auditType:forget-unseen-differently-resolved instance::0 cluster: message:Forgotten instances: 0
2019-08-14 17:34:12 INFO auditType:review-unseen-instances instance::0 cluster: message:Operations: 0
2019-08-14 17:34:12 DEBUG kv.SubmitMastersToKvStores, clusterName: , force: false: numPairs: 5
2019-08-14 17:34:12 INFO auditType:forget-unseen instance::0 cluster: message:Forgotten instances: 0
2019-08-14 17:34:12 INFO auditType:resolve-unknown-masters instance::0 cluster: message:Num resolved hostnames: 0
2019-08-14 17:34:12 DEBUG kv.SubmitMastersToKvStores: submitKvPairs: 5
[martini] Started GET /web/clusters/ for 192.168.56.1:54294
[martini] Completed 200 OK in 10.713799ms
[martini] Started GET /api/clusters-info for 192.168.56.1:54294
[martini] Completed 200 OK in 4.009104ms
[martini] Started GET /api/check-global-recoveries for 192.168.56.1:54291
[martini] Completed 200 OK in 5.498638ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Started GET /api/clusters-info for 192.168.56.1:54294
[martini] Completed 200 OK in 13.613024ms
[martini] Completed 200 OK in 3.732243ms
[martini] Started GET /api/maintenance for 192.168.56.1:54294
[martini] Completed 200 OK in 1.999938ms
[martini] Started GET /api/replication-analysis for 192.168.56.1:54290
[martini] Completed 200 OK in 47.912159ms
[martini] Started GET /api/problems for 192.168.56.1:54290
[martini] Completed 200 OK in 18.358529ms
2019-08-14 17:35:12 INFO auditType:review-unseen-instances instance::0 cluster: message:Operations: 0
2019-08-14 17:35:12 INFO auditType:inject-unseen-masters instance::0 cluster: message:Operations: 0
2019-08-14 17:35:12 INFO auditType:forget-unseen instance::0 cluster: message:Forgotten instances: 0
2019-08-14 17:35:12 INFO auditType:forget-unseen-differently-resolved instance::0 cluster: message:Forgotten instances: 0
2019-08-14 17:35:12 INFO auditType:resolve-unknown-masters instance::0 cluster: message:Num resolved hostnames: 0
2019-08-14 17:35:12 DEBUG kv.SubmitMastersToKvStores, clusterName: , force: false: numPairs: 5
2019-08-14 17:35:12 DEBUG kv.SubmitMastersToKvStores: submitKvPairs: 0
^C
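At this point the web UI on port 3000 is reachable and the es1/es2/es3 replication topology has been discovered (the /api/discover/es2/3306 call above came from the UI's Discover page). The same operations can be driven from the shell with orchestrator-client, which talks to the local API by default; a minimal sketch using the hostnames seen in the log:

# Point the client at this orchestrator instance (the default is the local API anyway).
export ORCHESTRATOR_API="http://127.0.0.1:3000/api"
# Discover one instance; the rest of the replication topology is crawled from it.
orchestrator-client -c discover -i es2:3306
# List known clusters and print the topology of the cluster es2 belongs to.
orchestrator-client -c clusters
orchestrator-client -c topology -i es2:3306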