Commit a6b9d93d authored by remy

mapping gitlabpriv repo

parent 6010ca0f
@@ -8,25 +8,34 @@ This version 2 is a complete redesign from the MBB `portail_admin` project, that
It was a part of the MBB `portail_admin` application (written mainly by my former co-worker [_Jimmy Lopez_](https://github.com/Falindir)), but is now a plugin of MBB `portail_admin` and can be used in a standalone version (default).
The first version was a single file; you can see it [here](https://github.com/remyd1/salt_states/tree/master/monitor_salt_json). In this first version, cron files were written into `/var/www/html/exports/YYYYMM/YYYYMMDD_export-name.json`, then in `/var/www/html/exports/YYYYMM/YYYYMMDD_HH_export-name.json` and `/var/www/html/exports/YYYYMM/YYYYMMDD_HHMM_export-name.json`. In this version 2, files are located, by default, in `/var/www/html/exports/YYYYMM/DD/YYYYMMDD_HH_MM_export-name.json`.
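The version-2 layout can be reproduced with plain `date` formatting; a minimal sketch (the export root and the report name are placeholders, not fixed by the application):

```bash
# Build a version-2 export path: <root>/YYYYMM/DD/YYYYMMDD_HH_MM_<export-name>.json
EXPORT_ROOT="exports"                 # e.g. /var/www/html/exports in production
SUBDIR="$(date +%Y%m)/$(date +%d)"    # YYYYMM/DD
STAMP="$(date +%Y%m%d_%H_%M)"         # YYYYMMDD_HH_MM
EXPORT_FILE="${EXPORT_ROOT}/${SUBDIR}/${STAMP}_hosts_status.json"
echo "${EXPORT_FILE}"
```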
## Install

The easiest way is to configure your SaltStack master as a Web server and use the JSON exports directly. Nevertheless, another option would be to mount a remote directory on the SaltStack master and use it for the JSON exports and the web display. Finally, another option would be to configure `salt-api`, but I did not try this for that purpose.

The following assumes you are using the easiest way.

1. [Required] Clone this repository in your WWW document root and create the exports directory for the json files (the web user must have access to the files created by the cron):
```bash
mkdir -p /var/www/html/exports
chmod -R 755 /var/www/html/exports
```
2. [Required] Copy `check_salt_json` to `/usr/local/sbin` and set it to be executable by root,
3. [Optional] If you are using some additional [jsonreader2 plugins](#Specific-plugin-formulas), edit `check_salt_json`:
   - Uncomment all the plugins you use, following "`# UNCOMMENT NEXT LINE(S) TO USE IT`",
   - If you use the `check_disks` formula, replace `TARGET` with a minion ID in `check_salt_json`. The `TARGET` minion is a minion that knows the disk smart status of all physical minions through a mine called `mine_disks`,
4. [Required] Add a crontab, eg: `*/30 * * * * /usr/local/sbin/check_salt_json 2>/dev/null`, and reload the service:
```bash
service cron reload
```
5. [Required] Install all needed packages for this web application:
   - `php-fpm`, then reload your web server:
```bash
# for ubuntu/debian with php7.3 and apache:
@@ -36,33 +45,58 @@ a2enconf php7.3-fpm
systemctl reload apache2
```
   - You also need a mysql/mariadb server:
```bash
apt install -y mariadb-server php-mysql
a2enmod proxy_fcgi setenvif
```
6. [Required] Copy and edit the files in the `config/` directory:
```bash
cp config/Conf.php.sample config/Conf.php
vi config/Conf.php
cp config/Plugin.php.sample config/Plugin.php
vi config/Plugin.php
```
By default, only the mandatory sections are available (adjust your `$SECTIONS` to the plugins you installed).
7. [Required] Install the database using the `*.sql` files in the `sql` directory; create a mysql user and set it correctly in `config/Conf.php` and `sql/jsonreader.sql`.
```bash
mysql -uroot -p < sql/jsonreader.sql
mysql -uroot -p jsonreader2 < sql/CReader.sql
mysql -uroot -p jsonreader2 < sql/ReaderTable.sql
mysql -uroot -p jsonreader2 < sql/CReaderPanel.sql
```
> For a jsonreader2 plugin version of `portail_admin`, please check `config/Plugin.php.sample`.
## Using SaltStack Formula

`Jsonreader2` uses some specific SaltStack formulas plus basic salt calls (`test.ping`, `service.status`, `osfinger`, ...).
## Other JSON reports
To use other JSON reports, you need to clone the plugin (or the SaltStack formula), and then uncomment or add the specific report in `check_salt_json`. Then, add the jsonreader2 plugin title in `config/Plugin.php` (`$SECTION`).
For a new plugin of your own, or an update of jsonreader2, you will also need to add it to the database, and provide a PHP way to parse the JSON file. Adding it to the database can be done in the WebUI, using the _Misc_ menu items.
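Salt's `--out=json --static` output is a single JSON object keyed by minion ID, so a parser for a new report mostly walks that top-level map. A minimal sketch against a fabricated `test.ping`-style export (the file name and two-minion content are illustrative, not the plugin API):

```bash
# Fabricated sample of a `salt '*' test.ping --out=json --static` export
cat > sample_hosts_status.json <<'EOF'
{ "minion1": true, "minion2": false }
EOF
# List minions that did not answer (any value other than true)
python3 - <<'EOF'
import json
with open("sample_hosts_status.json") as fh:
    report = json.load(fh)          # top-level map: minion ID -> result
down = sorted(m for m, ok in report.items() if ok is not True)
print(" ".join(down))               # prints: minion2
EOF
```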
### Specific plugin formulas
All these plugins need to be configured according to your SaltStack setup.
- To check disks' smart status, take a look at the [`check_disks`](https://gitlab.mbb.univ-montp2.fr/saltstack-formulas/check_disks) formula,
- To check services, take a look at the [`check_services`](https://gitlab.mbb.univ-montp2.fr/saltstack-formulas/check_services) formula. It can also use web reports from [`website_checks`](https://gitlab.mbb.univ-montp2.fr/remy/website_checks); to use it, clone it into `/usr/local/website_checks`,
- To check D state processes, take a look at the [`get_d_states`](https://gitlab.mbb.univ-montp2.fr/saltstack-formulas/get_d_states) formula,
<!--- To check borgbackup report, take a look at [`borgbackup`](https://gitlab.mbb.univ-montp2.fr/saltstack-formulas/borgbackup) formula,-->
## Logrotate

The `/var/www/html/exports/` directory may grow quickly. A `logrotate` file is available in the `utils/` directory. If you want to use it, copy it to `/etc/logrotate.d/checkjson` and reload the service.
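The shipped file in `utils/` is authoritative; purely as an illustration of the shape such a rule takes (the glob and retention values here are assumptions, not the shipped settings):

```
/var/www/html/exports/*/*/*.json {
    monthly
    rotate 3
    missingok
    notifempty
    compress
}
```

After copying the real file to `/etc/logrotate.d/checkjson`, `logrotate -d /etc/logrotate.d/checkjson` dry-runs the rule without rotating anything.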
@@ -11,33 +11,140 @@ if(!JSONREADER2_STANDALONE) {
}
require_once JSR_PATH.'/dao/DBquery.php';
require_once '../dao/CReaderQuery.php';
require_once '../dao/CReaderPanelQuery.php';
require_once "../model/CReader.php";
require_once "../model/CReaderPanel.php";
require_once JSR_PATH.'/dao/LogDBQuery.php';
require_once JSR_PATH.'/model/Message.php';
require_once JSR_PATH.'/model/Log.php';

$db = new DBquery();
$action = NULL;
$message = NULL;
$today = date("Y-m-d G:i:s");
$ID = -1;
$ckey = NULL;
$cvalue = NULL;
$FieldType = NULL;
$color = "#3498db"; // default is blue

if(isset($_POST['action'])) {
    $action = $_POST['action'];
} else {
    if(isset($_GET['action'])) {
        $action = $_GET['action'];
    } else {
        $action = "";
    }
}

if($action == "save") {
    $valueReader = [];
    $colorReader = [];
    $FieldTypeReader = [];
    foreach ($_POST as $ckey => $cvalue){
        if(strpos($ckey, "colorpicker-regularfont-") !== false) {
            $colorReader[str_replace("colorpicker-regularfont-", "", $ckey)] = $cvalue;
        } elseif(strpos($ckey, "FieldType-") !== false) {
            $FieldTypeReader[str_replace("FieldType-", "", $ckey)] = $cvalue;
        } else {
            $valueReader[$ckey] = $cvalue;
        }
    }
    foreach ($valueReader as $ckey => $cvalue){
        $color = $colorReader[$ckey];
        $FieldType = $FieldTypeReader[$ckey];
        $value = str_replace("\n", ";", $cvalue);
        $value = str_replace("\r", "", $value);
        $creader = new CReader(0, $ckey, $value, $FieldType, $color);
        CReaderQuery::updateCReader($db, $creader);
    }
} else {
    if(isset($_POST['CReaderID'])) {
        $ID = $_POST['CReaderID'];
    } else {
        if(isset($_GET['CReaderID'])) {
            $ID = $_GET['CReaderID'];
        } else {
            $ID = -1;
        }
    }
    if(isset($_POST['ckey'])) {
        $ckey = $_POST['ckey'];
    }
    if(isset($_POST['cvalue'])) {
        $cvalue = $_POST['cvalue'];
    }
    if(isset($_POST['FieldType'])) {
        $FieldType = $_POST['FieldType'];
    }
    if(isset($_POST['color'])) {
        $color = $_POST['color'];
    } else {
        if(isset($_POST['DEFAULT_COLOR'])) {
            $color = $_POST['DEFAULT_COLOR'];
        } elseif(isset($_POST['colorpicker-regularfont-DEFAULT_COLOR'])) {
            $color = $_POST['colorpicker-regularfont-DEFAULT_COLOR'];
        } else { $color = "#3498db"; }
    }
    if(isset($_POST['Cpanel'])) {
        $CpanelTitle = $_POST['Cpanel'];
    }
    $creader = new CReader($ID, $ckey, $cvalue, $FieldType, $color);
    if($action == "create") {
        /* uncomment the following and comment out the header Location at the end of this file below to debug */
        //var_dump($_POST);
        $creader->escape($db);
        $message = $db->create($creader);
        if(!JSONREADER2_STANDALONE) {
            $log = new Log(-1, "admin", $creader->ID, "insert", $message->value, $today, -1);
            LogDBQuery::createLog($db, $log);
        }
        $current_Creader = CReaderQuery::getCReaderWithKey($db, $ckey);
        if(!empty($CpanelTitle)) {
            $existingCPanels = CReaderPanelQuery::getAllCReaderPanelByTitle($db, $CpanelTitle);
            if(!empty($existingCPanels)) {
                if (count($existingCPanels) == 1 && is_null($existingCPanels[0]->CReaderID)) {
                    /* update the NULL (or maybe "NULL") value to the new $CReader->ID */
                    $existingCPanels[0]->CReaderID = $current_Creader[0]->ID;
                    $existingCPanels[0]->escape($db);
                    $message = $db->update($existingCPanels[0]);
                    if(!JSONREADER2_STANDALONE) {
                        $log = new Log(-1, "admin", $existingCPanels[0]->ID, "update", $message->value, $today, -1);
                        LogDBQuery::createLog($db, $log);
                    }
                } else {
                    $pos = $existingCPanels[0]->PanelPosition;
                    $creaderpanel = new CReaderPanel($ID, $current_Creader[0]->ID, $CpanelTitle, $pos);
                    $creaderpanel->escape($db);
                    $message = $db->create($creaderpanel);
                    if(!JSONREADER2_STANDALONE) {
                        $log = new Log(-1, "admin", $creaderpanel->ID, "insert", $message->value, $today, -1);
                        LogDBQuery::createLog($db, $log);
                    }
                }
            }
        }
    }
    if($action == "delete") {
        $current_Creader = CReaderQuery::getCReaderByID($db, $ID);
        $message = CReaderQuery::deleteCReader($db, $current_Creader[0]);
        if(!JSONREADER2_STANDALONE) {
            $log = new Log(-1, "admin", $ID, "delete", $message->value, $today, -1);
            LogDBQuery::createLog($db, $log);
        }
    }
}
header("Location: ../services/jsonreaderConfig.php");
?>
<?php
require_once "../config/Plugin.php";
define("PAGE","actionJsonReader");
if(!JSONREADER2_STANDALONE) {
    session_start();
    if(!isset($_SESSION['username'])) {
        header("Location: ".JSR_PATH."/index.php");
    }
}
require_once JSR_PATH.'/dao/DBquery.php';
require_once '../dao/CReaderPanelQuery.php';
require_once '../dao/CReaderQuery.php';
require_once "../model/CReaderPanel.php";
require_once "../model/CReader.php";
require_once JSR_PATH.'/dao/LogDBQuery.php';
require_once JSR_PATH.'/model/Message.php';
require_once JSR_PATH.'/model/Log.php';

$db = new DBquery();
$action = NULL;
$message = NULL;
$today = date("Y-m-d G:i:s");
$ID = -1;
$PanelTitle = NULL;
$position = NULL;

if(isset($_POST['action'])) {
    $action = $_POST['action'];
} else {
    if(isset($_GET['action'])) {
        $action = $_GET['action'];
    } else {
        $action = "";
    }
}
if(isset($_POST['PanelID'])) {
    $ID = $_POST['PanelID'];
} else {
    if(isset($_GET['PanelID'])) {
        $ID = $_GET['PanelID'];
    } else {
        $ID = -1;
    }
}
if(isset($_POST['PanelTitle'])) {
    $PanelTitle = $_POST['PanelTitle'];
} else {
    if(isset($_GET['PanelTitle'])) {
        $PanelTitle = $_GET['PanelTitle'];
    }
}
if(isset($_POST['position'])) {
    $position = $_POST['position'];
}
$creaderpanel = new CReaderPanel($ID, NULL, $PanelTitle, $position);
if($action == "create") {
    /* uncomment the following and comment out the header Location at the end of this file below to debug */
    //var_dump($_POST);
    $creaderpanel->escape($db);
    $message = $db->create($creaderpanel);
    if(!JSONREADER2_STANDALONE) {
        $log = new Log(-1, "admin", $creaderpanel->ID, "insert", $message->value, $today, -1);
        LogDBQuery::createLog($db, $log);
    }
}
if($action == "delete") {
    $CRs = CReaderQuery::getAllCReaderObjects($db);
    $Panels = CReaderPanelQuery::getAllCReaderPanelByTitle($db, $PanelTitle);
    foreach($Panels as $Panel) {
        foreach($CRs as $CR) {
            if($CR->ID == $Panel->CReaderID) {
                $message = CReaderQuery::deleteCReader($db, $CR);
                if(!JSONREADER2_STANDALONE) {
                    $log = new Log(-1, "admin", $CR->ckey, "delete", $message->value, $today, -1);
                    LogDBQuery::createLog($db, $log);
                }
            }
        }
    }
    $message = CReaderPanelQuery::deleteCReaderPanel($db, $PanelTitle);
    if(!JSONREADER2_STANDALONE) {
        $log = new Log(-1, "admin", $PanelTitle, "delete", $message->value, $today, -1);
        LogDBQuery::createLog($db, $log);
    }
}
header("Location: ../services/jsonreaderConfig.php");
?>
@@ -13,6 +13,8 @@ if(!JSONREADER2_STANDALONE) {
require_once JSR_PATH.'/dao/DBquery.php';
require_once JSR_PATH.'/dao/ReaderTableDBQuery.php';
require_once "../model/ReaderTable.php";
require_once JSR_PATH.'/dao/LogDBQuery.php';
require_once JSR_PATH.'/model/Message.php';
$db = new DBquery();
@@ -56,7 +58,7 @@ if(isset($_POST['title'])) {
}
if(isset($_POST['pos'])) {
    $pos = intval($_POST['pos']);
} else {
    $pos = ReaderTableDBQuery::getNextPosReaderTable($db);
}
@@ -93,4 +95,4 @@ if($action == "create") {
}
}
header("Location: ".JSR_PATH."/services/jsonreaderTable.php");
@@ -14,10 +14,13 @@ SUBDIR=${CURMONTH}"/"${CURDAY}

mkdir -p ${JSON_EXPORT_PATH}/${SUBDIR}

/usr/bin/salt '*' saltutil.sync_modules 1>&2 >/dev/null

####################### CHECKING SERVICES FROM PILLAR hosts.sls #####################
# UNCOMMENT NEXT LINE TO USE IT
#/usr/bin/salt '*' state.sls check_services -t 10 --out=json --static |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_services.json
#####################################################################################
### DEBUG
## to debug previous state:
#salt '*' state.sls check_services -t 10 --async -v
## then, with the jid from previous command:
## salt-run jobs.lookup_jid <job id>
@@ -26,10 +29,12 @@ mkdir -p ${JSON_EXPORT_PATH}/${SUBDIR}

sleep 3

############################ CHECKING D STATES PROCESSES ############################
# UNCOMMENT NEXT LINE TO USE IT
#salt '*' cmd.run "/usr/local/sbin/get_d_states" --static --out=json |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_dstates.json
#####################################################################################
################################# PING CHECK ########################################
/usr/bin/salt '*' test.ping -t 10 --out=json |sort | grep -Ev "[\{\}],?" |awk '{
    if (NR == 1) {
        total="\{\n"$0;
@@ -40,54 +45,90 @@ salt '*' cmd.run "/usr/local/sbin/get_d_states" --static --out=json |grep -v " d
    END {
        print total"\n\}";
    }' 2>/dev/null > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_hosts_status.json
#####################################################################################
sleep 3

############################### CHECKING STORAGE ####################################
################################## DISK USAGE #######################################
/usr/bin/salt '*' disk.percent --out=json --static |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_disks_usage.json
#####################################################################################
############################## DISK SMART STATUS ####################################
# using saltstack mine b/c of timeout issues; TARGET is a SaltStack minion knowing the mine. It must be defined above.
# UNCOMMENT NEXT LINES TO USE IT
#nb_lines=`/usr/bin/salt ${TARGET} mine.get '*' 'mine_disks' |grep -v " did not respond. No job will be sent."|wc -l`
#/usr/bin/salt ${TARGET} mine.get "*" "mine_disks" |grep -v " did not respond. No job will be sent."| awk -v last=$nb_lines '{
#    if ( NR == 1 ) {
#        print "{\n\"disks_status\":";
#    } else if (NR == 2) {
#        print "\t{"
#    } else if (NR==last) {
#        print "\t\t}\n\t}\n}";
#    } else if ($NF ~ ".+:$") {
#        gsub(/[ :]/, "", $0);print "\t\""$0"\":";
#    } else if ($0 ~ "No such file") {
#        print "\t{\n\t\t\"/var\": \"ERROR - UNREADABLE\"\n\t},";
#    } else {print;}
#}' > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_disks_status.json
#####################################################################################

################################# ZPOOL STATUS ######################################
# TODO: using salt zpool module
# UNCOMMENT NEXT LINE TO USE IT
#salt '*' cmd.run "/bin/bash -c \"if \[ -f /sbin/zfs \] || \[ -f /usr/local/sbin/zfs \]; then if \\\[ \\\`/usr/local/sbin/get_d_states count\\\` == 0 \\\]; then zpool status -x; else echo \\\"D states processes have been found\\\"; fi; fi\"" --static --out=json | grep -v " did not respond. No job will be sent." > $JSON_EXPORT_PATH/$SUBDIR/"$DATE"_zpool.json
#####################################################################################

## old way, without D state process checking (it may be overloaded with additional D state processes):
#salt '*' cmd.run "if [ -f /sbin/zfs ] || [ -f /usr/local/sbin/zfs ]; then zpool status -x; fi" --static --out=json | grep -v " did not respond. No job will be sent." > $JSON_EXPORT_PATH/$SUBDIR/"$DATE"_zpool.json
################################### VERSIONS ########################################
# BASIC CHECKING FROM MINIONS GRAINS
salt '*' grains.get osfinger --static --out=json |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_osversion.json
salt '*' grains.get biosreleasedate --static --out=json |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_biosdate.json
salt '*' grains.get saltversion --static --out=json |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_saltversion.json
salt '*' grains.get kernelrelease --static --out=json |grep -v " did not respond. No job will be sent." > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_kernelversion.json
#####################################################################################
################################### WEBSITES ########################################
# From https://gitlab.mbb.univ-montp2.fr/remy/website_checks
# UNCOMMENT NEXT LINES TO USE IT
#bash /usr/local/website_checks/check_urls.sh check 2>/dev/null 1>&2
#cp /usr/local/website_checks/workdir/checksums.json ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_websites_checksums.json
#cp /usr/local/website_checks/workdir/status.json ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_websites_status.json
#bash /usr/local/website_checks/check_certs.sh > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_websites_certs.json
#####################################################################################
# to generate host_https_list.txt:
# salt -G 'roles:https' cmd.run 'hostname -f' --out=yaml |awk '{print $2}'
# check /usr/local/un-peu-de-sel/salt_pillar/machines/roles_grains.sls content
############################## BORGBACKUP STATUS #####################################
# UNCOMMENT NEXT LINES TO USE IT
#nb_lines=`/usr/bin/salt '*' cmd.run 'if [ -f /var/log/borg/$(date +"%Y%m%d".json) ] ;then if [ -s /var/log/borg/$(date +"%Y%m%d".json) ]; then echo "/var/log/borg/$(date +"%Y%m%d".json):"; sed -e "s|^}|}\n},|" /var/log/borg/$(date +"%Y%m%d".json); else echo "\"/var/log/borg/$(date +"%Y%m%d".json)\": \"borgbackup report file found but is empty!\" \n},"; fi; else echo "\"/var/log/borg/$(date +"%Y%m%d".json)\": \"No borgbackup report file found!\" \n},"; fi' |grep -v " did not respond. No job will be sent."|wc -l`
#/usr/bin/salt "*" cmd.run 'if [ -f /var/log/borg/`date +"%Y%m%d".json` ] ;then if [ -s /var/log/borg/$(date +"%Y%m%d".json) ]; then echo "/var/log/borg/$(date +"%Y%m%d".json):"; sed -e "s|^}|}\n},|" /var/log/borg/`date +"%Y%m%d".json`; else echo "\"/var/log/borg/$(date +"%Y%m%d".json)\": \"borgbackup report file found but is empty!\" \n},"; fi; else echo "\"/var/log/borg/$(date +"%Y%m%d".json)\": \"No borgbackup report file found!\" \n},"; fi' |grep -v " did not respond. No job will be sent." | \
#awk -v last=$nb_lines '{
# if ( NR == 1 ) {
# gsub(/[ :]/, "", $0);print "{\n\"borg_status\": {\n\t\""$0"\": {";
# } else if (NR==last) {
# print "\t\t}\n\t}\n}";
# } else if ($NF ~ ".+:$") {
# if ($0 ~ "/var/log/borg") {
# gsub(/[ :]/, "", $0);print "\t\t\""$0"\":";
# } else {
# gsub(/[ :]/, "", $0);print "\t\""$0"\": {";
# }
# } else if ($0 ~ "No such file") {
# print "\t{\n\t\t\"/var\": \"ERROR - UNREADABLE\"\n\t},";
# } else {print "\t\t" $0;}
#}' > ${JSON_EXPORT_PATH}/${SUBDIR}/"${DATE}"_borg_status.json
#####################################################################################