Diffstat:
 -rw-r--r--  src/Makefile                        13
 -rw-r--r--  src/README                          21
 -rw-r--r--  src/README.rh-upload-core           54
 -rwxr-xr-x  src/extras/htmlog                   55
 -rwxr-xr-x  src/extras/rh-upload-core          324
 -rw-r--r--  src/gpgkeys/gpg.template            14
 -rwxr-xr-x  src/lib/sos/helpers.py              16
 -rw-r--r--  src/lib/sos/plugins/autofs.py        8
 -rw-r--r--  src/lib/sos/plugins/cluster.py     373
 -rw-r--r--  src/lib/sos/plugins/filesys.py       1
 -rw-r--r--  src/lib/sos/plugins/kernel.py       51
 -rw-r--r--  src/lib/sos/plugins/ldap.py         16
 -rw-r--r--  src/lib/sos/plugins/networking.py    3
 -rw-r--r--  src/lib/sos/plugins/process.py       6
 -rw-r--r--  src/lib/sos/plugins/squid.py         6
 -rw-r--r--  src/lib/sos/plugins/veritas.py       5
 -rw-r--r--  src/lib/sos/plugins/yum.py           5
 -rw-r--r--  src/lib/sos/plugintools.py          98
 -rwxr-xr-x  src/lib/sos/policyredhat.py        113
 -rw-r--r--  src/setup.py                         2
 -rw-r--r--  src/sos.spec                        12
 -rwxr-xr-x  src/sosreport                       25
 22 files changed, 882 insertions, 339 deletions
diff --git a/src/Makefile b/src/Makefile
index b1c1d946..1fa477a3 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -4,12 +4,12 @@
NAME = sos
VERSION = $(shell awk '/^%define version / { print $$3 }' sos.spec)
+RELEASE = $(shell awk '/^%define release / { print $$3 }' sos.spec)
REPO = https://sos.108.redhat.com/svn/sos
-SVNTAG = r$(subst .,-,$(VERSION))
+SVNTAG = r$(subst .,-,$(VERSION))_$(RELEASE)
SRCDIR = $(PWD)
TOPDIR = $(PWD)/build/rpm-$(NAME)-$(VERSION)
-
all:
.PHONY: tag-release tarball release install version clean
@@ -59,7 +59,7 @@ release: clean
@echo " "
@echo "The final archive is ./$(NAME)-$(VERSION).tar.bz2."
-install:mo
+install:mo gpgkey
python setup.py install
@rm -rf build/lib
@@ -67,7 +67,7 @@ version:
@echo "The version is $(NAME)-$(VERSION)"
clean:
- @rm -fv *~ .*~ changenew ChangeLog.old $(NAME)-$(VERSION).tar.bz2 sosreport.1.gz
+ @rm -fv *~ .*~ changenew ChangeLog.old $(NAME)-$(VERSION).tar.bz2 sosreport.1.gz gpgkeys/rhsupport.*
@rm -rfv build/*
rpm: mo
@@ -79,6 +79,7 @@ rpm: mo
rm -rf $(NAME)-$(VERSION) ; \
ln -s $(SRCDIR) $(NAME)-$(VERSION) ; \
tar --gzip --exclude=.svn --exclude=svn-commit.tmp --exclude=$(NAME)-$(VERSION)/build --exclude=$(NAME)-$(VERSION)/dist \
+ --exclude gpgkeys/rhsupport.key \
-chSpf $(TOPDIR)/SOURCES/$(NAME)-$(VERSION).tar.gz $(NAME)-$(VERSION) ; \
rm -f $(NAME)-$(VERSION)
@@ -95,3 +96,7 @@ pot:
mo:
find locale/*/LC_MESSAGES -name sos.po -exec python tools/msgfmt.py {} \;
+
+gpgkey:
+ @test -f gpgkeys/rhsupport.pub && echo "GPG key already exists." || \
+ gpg --batch --gen-key gpgkeys/gpg.template
diff --git a/src/README b/src/README
index b65b07ed..f1dbf2c7 100644
--- a/src/README
+++ b/src/README
@@ -12,13 +12,24 @@ To access to the public source code repository for this project run:
svn checkout https://sos.108.redhat.com/svn/sos/trunk sos
(all the following as root)
-to install locally ==> python setup.py install
+to install locally ==> make install
to build an rpm ==> make rpm
See the Makefile.
+Maintainers:
+
+ Navid Sheikhol-Eslami <navid@redhat.com>
+
Contributors:
-Steve Conklin <sconklin@redhat.com>
-Pierre Amadio <pamadio@redhat.com>
-John Berninger <jwb@redhat.com>
-Navid Sheikhol-Eslami <navid@redhat.com>
+
+ Steve Conklin <sconklin@redhat.com>
+ Pierre Amadio <pamadio@redhat.com>
+ John Berninger <jwb@redhat.com>
+
+Thanks to:
+
+ Eva Schaller <eschaller@redhat.com> for providing an Italian translation
+ Marco Ceci <mceci@redhat.com> for helping me out with the cluster plugin
+ Leonardo Macchia <lmacchia@redhat.com> for being my personal regexp generator
+ Imed Chihi <ichihi@redhat.com> for providing Arabic and French translations
diff --git a/src/README.rh-upload-core b/src/README.rh-upload-core
new file mode 100644
index 00000000..c03f5d56
--- /dev/null
+++ b/src/README.rh-upload-core
@@ -0,0 +1,54 @@
+
+rh-upload-core
+
+This script, shipped with the SOS RPM, automates RHEL kernel vmcore file handling. It can compress,
+encrypt, checksum, split and upload a vmcore file after you pass a few options and answer two
+questions.
+
+The script requires that the system it is run on has lftp, openssl, and gzip installed; otherwise it
+will exit with an error.
+
+#### Recommendations ####
+
+Red Hat strongly recommends that you perform an md5 checksum and provide the resulting file to your
+technician. This only takes a few extra minutes and can save a lot of headaches if the file is
+somehow corrupted during transfer.
+
+Splitting the core file is not recommended unless you are on an unreliable or low-throughput
+connection. Lftp will automatically resume uploads after connection interruptions, but in
+some cases splitting the core file into smaller hunks may be desirable.
+
+Because your core could contain data that is sensitive to your company, it is recommended that
+core file encryption at least be considered. While it is unlikely that someone could sniff
+that sensitive information in transit, it is possible. Even with very large core files, encrypting
+a compressed core file takes less time than compressing it in the first place.
+The 10 or so extra minutes it takes may be worth it.
+
+It is likely that you will be running the script remotely, so it is recommended that you launch a
+screen session _before_ kicking off this script. That way, if your connection is dropped for some
+reason, the script will continue until it is ready to prompt for destination input.
+
+#### Questions ####
+
+A couple of comments regarding choices made in the design of the script.
+
+Why use gzip instead of bzip2?
+While bzip2 /does/ compress tighter than gzip, it is significantly slower on large files like
+vmcore files. When compressing a core file, speed matters more than the overall
+compression ratio.
+
+Why not have a switch to provide the ticket number and/or upload destination?
+Good question. While there isn't really a technical reason, it seemed more logical to prompt
+for this information at the beginning and end of the script run.
+
+Do I have to use this script to upload kernel vmcore files?
+No, you don't have to use it; however, we would prefer you did. For one thing, it standardizes the
+core file naming convention on the dropbox. Secondly, it allows you to run this script
+and then go work on something else while all of the file operations run; however, if you wish to
+stare blankly at a screen waiting for compression to complete so you can upload, that's entirely
+your prerogative. ;-)
+
+I have a suggestion for this script; who do I give it to?
+Open a ticket with Red Hat support with your request. It will certainly be considered. That said,
+this was written as a shell script precisely so that anyone could alter it in any way
+they see fit.
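
The upload step described above has a Python counterpart later in this diff: the new uploadResults() method in policyredhat.py pushes the finished report to dropbox.redhat.com over anonymous FTP. A minimal sketch of that same transfer, assuming the dropbox accepts anonymous uploads into /incoming as uploadResults() does; the file name in the usage comment is a hypothetical example:

    # Sketch only: mirrors the ftplib calls used by uploadResults() in
    # policyredhat.py; not part of the rh-upload-core script itself.
    import os
    from ftplib import FTP

    def upload_core(path, host="dropbox.redhat.com", directory="/incoming"):
        fp = open(path, "rb")
        try:
            ftp = FTP(host)
            ftp.login()                 # anonymous login
            ftp.cwd(directory)
            ftp.set_pasv(True)
            ftp.storbinary("STOR %s" % os.path.basename(path), fp)
            ftp.quit()
        finally:
            fp.close()

    # upload_core("12345-20070101120000-vmcore.gz")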
diff --git a/src/extras/htmlog b/src/extras/htmlog
index d4680eea..387a5d90 100755
--- a/src/extras/htmlog
+++ b/src/extras/htmlog
@@ -1,16 +1,17 @@
#!/usr/bin/env python
-from optparse import OptionParser, Option
-import time, sys, os
-
-__cmdParser__ = OptionParser()
-__cmdParser__.add_option("-i", "--input", action="append", \
- dest="logfiles", type="string", \
- help="system log to parse")
-__cmdParser__.add_option("-v", "--verbose", action="count", \
- dest="verbosity", \
- help="How obnoxious we're being about telling the user what we're doing.")
-(__cmdLineOpts__, __cmdLineArgs__)=__cmdParser__.parse_args()
+#from optparse import OptionParser, Option
+import time, sys, os, glob
+import getopt
+
+#__cmdParser__ = OptionParser()
+#__cmdParser__.add_option("-i", "--input", action="append",
+# dest="logfiles", type="string", metavar = "FILE",
+# help="system log to parse")
+#__cmdParser__.add_option("-v", "--verbose", action="count",
+# dest="verbosity",
+# help="How obnoxious we're being about telling the user what we're doing.")
+#(__cmdLineOpts__, __cmdLineArgs__)=__cmdParser__.parse_args()
class host_class:
@@ -96,7 +97,6 @@ class host_class:
self.fp().seek(0)
continue
else:
- sys.exit("HOST IS EOF")
return ""
if self.validate_line(toret) or toret == "":
@@ -139,6 +139,7 @@ class logfile_class:
def time_end(self):
pos = self.fp.tell()
bs = 1024
+ if self.size() < bs: bs = self.size()
self.fp.seek(-bs, 2)
line = self.fp.read(bs)
toret = time.strptime(line[line.rfind("\n", 0, bs - 1) + 1:][0:15], "%b %d %H:%M:%S")
@@ -201,15 +202,43 @@ class logfile_class:
print "could not parse time", self.curline
return False
+def usage():
+    print "usage: htmlog [-h|--help] [-v] -i LOGFILE [-i LOGFILE ...]"
+
+try:
+ opts, args = getopt.getopt(sys.argv[1:], "hi:v", ["help", "input="])
+except getopt.GetoptError:
+ # print help information and exit:
+ usage()
+ sys.exit(2)
+
+cmdline = {}
+cmdline["logfiles"] = []
+
+for o, a in opts:
+ if o == "-v":
+ verbose = True
+ if o in ("-h", "--help"):
+ usage()
+ sys.exit()
+ if o in ("-i", "--input"):
+        # expand the argument so shell-style globs also work
+        for fname in glob.glob(a):
+            cmdline["logfiles"].append(fname)
+            sys.stderr.write("adding log %s\n" % fname)
+
hosts = {}
-for logname in __cmdLineOpts__.logfiles:
+for logname in cmdline["logfiles"]:
log = logfile_class(logname)
hostname = log.hostname()
+ sys.stderr.write("log %s for host %s\n" % (logname, hostname))
if not hosts.has_key(hostname):
hosts[hostname] = host_class()
hosts[hostname].add_log(log)
+sys.stderr.write("finished adding logs\n")
+
#print hosts["moka"].readline()
#print hosts["moka"].readline()
#print "DIRECT", hosts["moka"].fp().fp.tell()
diff --git a/src/extras/rh-upload-core b/src/extras/rh-upload-core
new file mode 100755
index 00000000..8240dd96
--- /dev/null
+++ b/src/extras/rh-upload-core
@@ -0,0 +1,324 @@
+#!/bin/bash
+
+#################################################################################
+# #
+# upload-core #
+# Version - 0.2 #
+# Copyright (c) 2007 Red Hat, Inc. All rights reserved. #
+# #
+# #
+# Written by David Mair #
+# Idea stolen from Chris Snook :-) #
+# #
+# Purpose - To help in the automation and encryption of kernel vmcore files. #
+# Specifically, this script will compress, encrypt, md5sum, #
+# and upload the core file automatically when invoked. #
+# Items are optional and specified by command line switch. #
+# ###############################################################################
+
+## Global directives
+umask 0077
+
+## Declare some variables
+date=`/bin/date -u +%G%m%d%k%M%S | /usr/bin/tr -d ' '`
+destination="dropbox.redhat.com"
+NOUPLOAD=NO
+SPLIT=0
+
+
+## Let's explain the usage
+
+function usage {
+ echo
+ echo
+ echo "Upload-core is a shell script for automating the handling of kernel vmcore files"
+ echo "for system administrators when working with support technicians."
+  echo "The script allows the user to compress, checksum, encrypt and upload a core"
+ echo "file with one command."
+ echo
+ echo "Usage: upload-core [-cehnNq] [-s size of hunks in MB] -f filename"
+ echo
+ echo "-c|--checksum : perform an md5 checksum on the file"
+ echo "-e|--encrypt : encrypt the core file"
+ echo "-f|--file : file to act on (required)"
+ echo "-h|--help : show this usage help"
+ echo "-n|--nocompress: do not compress the file (otherwise the file will be gzipped)"
+ echo "-N|--noupload : Do NOT upload to an ftp drop box"
+ echo "-q|--quiet : Do everything I ask and do it quietly"
+ echo "-s|--split : split file into small hunks"
+ echo
+ echo
+ exit 0
+}
+
+if [ $# == 0 ]; then usage
+fi
+
+TEMP=`getopt -o hecnqs:f:NF --long help,encrypt,quiet,noupload,checksum,nocompress,split:,file:,force -n 'upload-core.sh' -- "$@"`
+
+if [ $? != 0 ]; then echo "Options error -- Terminating..." >&2; exit 1; fi
+
+eval set -- "$TEMP"
+
+while true ; do
+ case "$1" in
+ -h|--help) usage;;
+ -e|--encrypt) ENCRYPT=yes; shift;;
+ -N|--noupload) NOUPLOAD=yes; shift;;
+ -c|--checksum) CHECKSUM=yes; shift;;
+ -q|--quiet) QUIET=yes; shift;;
+ -n|--nocompress) NOCOMPRESS=yes; shift;;
+ -s|--split)
+ case $2 in
+ "") echo "You must specify a hunk size." >&2; exit 1 ;;
+ *) SPLIT=$2; shift 2;;
+ esac ;;
+ -F|--force) FORCE=yes; shift;;
+ -f|--file)
+ case "$2" in
+ "") echo "You must specify a file name." >&2; exit 1 ;;
+ *) FILE=$2; shift 2;;
+ esac ;;
+ --) shift; break ;;
+ *) echo "Wrong options or flag specified"; usage;;
+ esac
+done
+
+
+# Okay, let's do some work!
+
+# Ensure the -f||--file flag was passed or die
+
+if test -z $FILE; then echo; echo "The -f or --file flag is required! Terminating."; echo; exit 1; fi
+
+# Validate the file exists or die
+
+if [ ! -f $FILE ]; then echo "Invalid filename or file not found. Terminating."; exit 1; fi
+
+function repeat {
+
+if [ "$QUIET" = "yes" ]; then return
+else
+
+# Let's repeat back to the user what we're doing and make sure this is what they really wanted.
+echo
+if [ "$ENCRYPT" = "yes" ] ; then echo " ## Will encrypt the file.";echo; fi
+
+if [ "$NOUPLOAD" = "yes" ] ; then echo " ## Will NOT upload the file.";echo; fi
+
+if [ "$CHECKSUM" = "yes" ] ; then echo " ## Will checksum the file.";echo; fi
+
+if [ "$SPLIT" != "FALSE" ] ; then echo " ## Will split the file.";echo; fi
+
+if [ "$NOCOMPRESS" = "yes" ] ; then echo -e " ## Will \E[41;30m\033[5mNOT\033[0m compress the file. Are you sure?";echo; else echo "Compressing $FILE"; echo; fi
+fi
+}
+
+
+function warn {
+
+if [ "$QUIET" = "yes" ]; then return
+else
+echo "Please note that depending upon the size of your vmcore file this could take"
+echo "quite some time to run. If the options listed above are correct please"
+echo "press enter. Otherwise press <ctrl>-<c> to exit the program and start again."
+echo
+read IGNORE
+echo
+fi
+}
+
+function ticket {
+echo
+echo "We'll need to use your trouble ticket number for a couple of things. Please"
+echo "enter your trouble ticket number:"
+read ticket_number
+echo
+return
+}
+
+function file_ops {
+# Need to rename the core file before we compress it
+if [ "$QUIET" != "yes" ]; then echo "Renaming core file $ticket_number-$date-vmcore"; fi
+
+new_file=$ticket_number-$date-vmcore
+
+/bin/mv $FILE $new_file
+}
+
+# Compress the file
+function compress {
+
+if [ "$NOCOMPRESS" = "yes" ]
+ then
+    if [ "$QUIET" != "yes" ]; then echo "Skipping compression step.";echo; fi
+
+ else
+ if [ "$QUIET" != "yes" ]; then echo "Starting file compression. This will take some time.";echo; fi
+ # Begin compression of file
+    if [ ! -x /usr/bin/gzip ]; then
+ echo "Cannot find gzip in /usr/bin/. Terminating."; exit 1
+ else
+ /usr/bin/gzip --fast $new_file
+ fi
+
+fi
+
+# only append the .gz suffix if the file was actually compressed
+if [ "$NOCOMPRESS" != "yes" ]; then new_file="$new_file.gz"; fi
+
+}
+
+# Encrypt the file
+function encrypt {
+
+if [ "$ENCRYPT" = "yes" ]
+ then
+ if [ "$QUIET" != "yes" ]; then echo "Beginning file encryption. This should only take a few minutes.";echo; fi
+ # Use the ticket number as the ssl keyfile name
+    if [ ! -x /usr/bin/openssl ]; then
+ echo "Cannot find openssl in /usr/bin. Terminating."; exit 1
+ fi
+ /usr/bin/openssl rand -out $ticket_number-$date.key -base64 48
+ if [ "$QUIET" != "yes" ]; then
+ echo "You have chosen to encrypt your core file. Your passkey file is"
+ echo "$ticket_number-$date.key. Please attach this key to your ticket."
+ echo
+ fi
+ /usr/bin/openssl aes-128-cbc -in $new_file -out $new_file.aes -pass file:$ticket_number-$date.key
+
+new_file="$new_file.aes"
+
+fi
+}
+
+function checksum {
+
+if [ "$CHECKSUM" = "yes" ]
+ then
+
+ if [ "$QUIET" != "yes" ]; then echo "Beginning $new_file checksum. This should only take a few minutes.";echo; fi
+    if [ ! -x /usr/bin/md5sum ]; then
+ echo "Cannot find md5sum in /usr/bin. Terminating."; exit 1
+ fi
+ md5result=`/usr/bin/md5sum $new_file|awk '{print $1}'`
+ echo $md5result > $ticket_number-$date-checksum.out
+
+fi
+
+}
+
+function split {
+
+if [ "$SPLIT" = "0" ]; then return; fi
+
+ hunk_size=$SPLIT
+ if (( $hunk_size > 0 )) && (( $hunk_size < 1001 ))
+ then
+    if [ ! -x /usr/bin/split ]; then
+ echo "Cannot find split in /usr/bin. Terminating."; exit 1
+ fi
+ # We need to make a directory to keep things sane
+ if [ "$QUIET" != "yes" ]; then echo "Creating directory $ticket_number-$date to house file hunks"; fi
+ /bin/mkdir $ticket_number-$date
+ /usr/bin/split -b "$hunk_size"m -d $new_file $ticket_number-$date/$new_file
+ else
+ echo "Invalid hunk size argument. Please enter a number greater than 0 and less than 1001."
+ echo "Terminating."; exit 1
+ fi
+
+
+}
+
+function upload {
+
+if [ "$NOUPLOAD" = "yes" ]; then
+       echo "All file operations are complete. The file(s) are ready to upload at your convenience."; return
+  else
+       echo "All file operations are complete. The file(s) are now ready to be uploaded."
+ echo "Please enter the destination host (default is dropbox.redhat.com)"
+ read destination_input
+ if [ "$destination_input" != "" ]; then destination=$destination_input; fi
+ if [ "$QUIET" != "yes" ]; then
+ echo
+ echo "Okay, uploading to $destination. Depending upon the file size and link throughput"
+ echo "this could take quite a while. When the upload completes this script will provide"
+ echo "additional information such as the md5sum, ssl key file, etc. and then exit."
+               echo "If you have lftp installed you will be able to monitor the upload status."
+ echo "If lftp is not available then this script will exit and the core file(s) will need"
+ echo "to be uploaded to your target system manually. The information indicated above"
+ echo "will still be provided."
+ echo
+       fi
+
+       # perform the upload whether or not --quiet was given
+         if [ ! -x /usr/bin/lftp ]; then
+ # No lftp installed
+ echo "lftp could not be found in /usr/bin. The file(s) will need to be uploaded manually."
+
+ else
+ # Make the lftp script first
+ echo "lftp $destination <<EOF" > lftp_scripts
+            if [ "$SPLIT" != "0" ]; then
+              echo "lcd $ticket_number-$date" >> lftp_scripts
+ echo "mirror -R" >> lftp_scripts
+ else
+ echo "put $new_file" >> lftp_scripts
+ fi
+ echo "quit 0" >> lftp_scripts
+ echo "EOF" >> lftp_scripts
+ /usr/bin/lftp -f lftp_scripts
+ fi
+fi
+}
+
+function closure {
+
+if [ "$ENCRYPT" = "yes" ] ; then
+ echo
+ echo " ## File was encrypted with $ticket_number-$date.key. Please upload the key"
+ echo "to Issue Tracker or send it to your support representative for decryption"
+ echo "after upload.";echo;
+fi
+
+
+if [ "$CHECKSUM" = "yes" ] ; then
+ echo
+ echo "## A checksum was performed on your core file (prior to splitting if you chose"
+ echo "to do so)."
+ echo
+ echo "The checksum results are in:"
+ echo "$ticket_number-$date-checksum.out."
+ echo
+ echo "Please include this when updating your trouble ticket so your support"
+ echo "representative can verify the copy uploaded.";echo
+fi
+
+if [ "$SPLIT" != 0 ]; then
+ echo
+       echo "## Your core file was split and the hunks are in $ticket_number-$date."
+ echo
+fi
+
+echo "This script has completed successfully. If you performed file encryption and/or file"
+echo "splitting you may want to consider removing those files once your support representative"
+echo "confirms receipt. This will reduce the amount of space being utilised on your system."
+echo "It is NOT recommended to remove the gzipped copy of the core file."
+echo
+echo -en "\E[40;31m\033[3mThis would be the only remaining copy of the core file on your system.\033[0m"
+echo
+echo
+echo "It is recommended to retain the core file until your support representative indicates"
+echo "that the problem has been identified and/or resolved."
+
+}
+
+# Run through the functions
+repeat
+warn
+ticket
+file_ops
+compress
+encrypt
+checksum
+split
+upload
+closure
diff --git a/src/gpgkeys/gpg.template b/src/gpgkeys/gpg.template
new file mode 100644
index 00000000..4df26886
--- /dev/null
+++ b/src/gpgkeys/gpg.template
@@ -0,0 +1,14 @@
+%echo Generating key...
+Key-Type: DSA
+Key-Length: 1024
+Subkey-Type: ELG-E
+Subkey-Length: 1024
+Name-Real: Red Hat Support
+Name-Comment: wi
+Name-Email: support@redhat.com
+Expire-Date: 0
+Passphrase: redhat
+%pubring gpgkeys/rhsupport.pub
+%secring gpgkeys/rhsupport.key
+%commit
+%echo done
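
This template lets "make gpgkey" generate the rhsupport keyring non-interactively. The FIXME in policyredhat.py further down in this diff quotes the gpg command line that would use the resulting public keyring to encrypt a finished report; a hedged sketch of that step, where the keyring path comes from that FIXME and the wrapper function itself is only an illustration:

    # Sketch based on the gpg command line quoted in the policyredhat.py FIXME;
    # the installed keyring path and the .gpg output naming are assumptions,
    # not behaviour this diff ships.
    import os

    def encrypt_report(tarball, keyring="/usr/share/sos/rhsupport.pub"):
        cmd = ("/usr/bin/gpg --trust-model always --batch "
               "--keyring %s --no-default-keyring --compress-level 0 "
               "--encrypt --recipient support@redhat.com "
               "--output %s.gpg %s" % (keyring, tarball, tarball))
        return os.system(cmd)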
diff --git a/src/lib/sos/helpers.py b/src/lib/sos/helpers.py
index bcdee6fb..bc9c51ff 100755
--- a/src/lib/sos/helpers.py
+++ b/src/lib/sos/helpers.py
@@ -25,7 +25,7 @@
"""
helper functions used by sosreport and plugins
"""
-import os, popen2, fcntl, select, itertools, sys, commands
+import os, popen2, fcntl, select, itertools, sys, commands, logging
from time import time
from tempfile import mkdtemp
@@ -60,6 +60,19 @@ def makeNonBlocking(afd):
def sosGetCommandOutput(command):
""" Execute a command and gather stdin, stdout, and return status.
"""
+ soslog = logging.getLogger('sos')
+
+ # Log if binary is not runnable or does not exist
+ for path in os.environ["PATH"].split(":"):
+ cmdfile = command.strip("(").split()[0]
+ # handle both absolute or relative paths
+ if ( ( not os.path.isabs(cmdfile) and os.access(os.path.join(path,cmdfile), os.X_OK) ) or \
+ ( os.path.isabs(cmdfile) and os.access(cmdfile, os.X_OK) ) ):
+ break
+ else:
+ soslog.log(logging.VERBOSE, "binary '%s' does not exist or is not runnable" % cmdfile)
+ return (127, "", 0)
+
stime = time()
inpipe, pipe = os.popen4(command, 'r')
inpipe.close()
@@ -123,4 +136,3 @@ def sosRelPath(path1, path2, sep=os.path.sep, pardir=os.path.pardir):
if not common:
return path2 # leave path absolute if nothing at all in common
return sep.join( [pardir]*len(u1) + u2 )
-
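
The new guard in sosGetCommandOutput() checks that the requested binary exists and is executable before spawning it: absolute paths are tested directly, relative names are searched along $PATH, and a failure is logged and returned as status 127. The same check in isolation, as a stand-alone sketch:

    # Stand-alone version of the executable check added above; purely
    # illustrative, not part of the helpers module.
    import os

    def is_runnable(command):
        cmdfile = command.strip("(").split()[0]
        if os.path.isabs(cmdfile):
            return os.access(cmdfile, os.X_OK)
        for path in os.environ["PATH"].split(":"):
            if os.access(os.path.join(path, cmdfile), os.X_OK):
                return True
        return False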
diff --git a/src/lib/sos/plugins/autofs.py b/src/lib/sos/plugins/autofs.py
index 85cb72a7..2cb22767 100644
--- a/src/lib/sos/plugins/autofs.py
+++ b/src/lib/sos/plugins/autofs.py
@@ -47,7 +47,7 @@ class autofs(sos.plugintools.PluginBase):
debugout=self.doRegexFindAll(r"^daemon.*\s+(\/var.*)", "/etc/syslog.conf")
for i in debugout:
return i
-
+
def setup(self):
self.addCopySpec("/etc/auto*")
self.addCopySpec("/etc/sysconfig/autofs")
@@ -58,6 +58,10 @@ class autofs(sos.plugintools.PluginBase):
self.collectExtOutput("/bin/egrep -e 'automount|pid.*nfs' /proc/mounts")
self.collectExtOutput("/bin/mount | egrep -e 'automount|pid.*nfs'")
self.collectExtOutput("/sbin/chkconfig --list autofs")
- self.addCopySpec(self.getdaemondebug())
+
+ # if debugging to file is enabled, grab that file too
+ daemon_debug_file = self.getdaemondebug()
+ if daemon_debug_file:
+ self.addCopySpec(daemon_debug_file)
return
diff --git a/src/lib/sos/plugins/cluster.py b/src/lib/sos/plugins/cluster.py
index 81b71f5a..6067f9c4 100644
--- a/src/lib/sos/plugins/cluster.py
+++ b/src/lib/sos/plugins/cluster.py
@@ -26,9 +26,19 @@ class cluster(sos.plugintools.PluginBase):
def checkenabled(self):
# enable if any related package is installed
- for pkg in [ "ccs", "cman", "cman-kernel", "magma", "magma-plugins",
- "rgmanager", "fence", "dlm", "dlm-kernel", "gulm",
- "GFS", "GFS-kernel", "lvm2-cluster" ]:
+ rhelver = self.cInfo["policy"].rhelVersion()
+ if rhelver == 4:
+ pkgs_to_check = [ "ccs", "cman", "cman-kernel", "magma", "magma-plugins",
+ "rgmanager", "fence", "dlm", "dlm-kernel", "gulm",
+ "GFS", "GFS-kernel", "lvm2-cluster" ]
+ elif rhelver == 5:
+ pkgs_to_check = [ "rgmanager", "luci", "ricci", "system-config-cluster",
+ "gfs-utils", "gnbd", "kmod-gfs", "kmod-gnbd", "lvm2-cluster" ]
+ else:
+ # can't guess what RHEL version we are running
+ pkgs_to_check = []
+
+ for pkg in pkgs_to_check:
if self.cInfo["policy"].pkgByName(pkg) != None:
return True
@@ -42,160 +52,160 @@ class cluster(sos.plugintools.PluginBase):
return False
def has_gfs(self):
- fp = open("/proc/mounts","r")
- for line in fp.readlines():
- mntline = line.split(" ")
- if mntline[2] == "gfs":
- return True
- fp.close()
- return False
+ try:
+ if len(self.doRegexFindAll(r'^\S+\s+\S+\s+gfs\s+.*$', "/etc/mtab")):
+ return True
+ except:
+ return False
def diagnose(self):
- try: rhelver = self.cInfo["policy"].pkgDictByName("redhat-release")[0]
- except: rhelver = None
-
- # FIXME: we should only run tests specific for the version, now just do them all regardless
- if rhelver == "4" or True:
- # check that kernel module packages are installed for
- # running kernel version
- pkgs_check = [ "dlm-kernel" , "cman-kernel" ]
- if self.has_gfs(): pkgs_check.append("GFS-kernel")
-
- for pkgname in pkgs_check:
- found = 0
- if self.cInfo["policy"].isKernelSMP() and self.cInfo["policy"].pkgByName(pkgname):
- found = 1 # -one- means package found (but not for same version as kernel)
- pkgname = pkgname + "-smp"
-
- for pkg in self.cInfo["policy"].allPkgsByName(pkgname):
- found = 1
- for reqline in self.cInfo["policy"].pkgRequires("%s-%s-%s" % (pkg[0],pkg[1],pkg[2]) ):
- if reqline[0] == 'kernel-smp' and reqline[1] == '=':
- reqline[2] = reqline[2] + "smp"
-
- if ( (not self.cInfo["policy"].isKernelSMP() and reqline[0] == 'kernel') or (self.cInfo["policy"].isKernelSMP() and reqline[0] == 'kernel-smp') ) and reqline[1] == '=' and reqline[2] == self.cInfo["policy"].kernelVersion():
- found = 2
- break
-
- if found == 0:
- self.addDiagnose("required package is missing: %s" % pkgname)
- elif found == 1:
- self.addDiagnose("required package is not installed for current kernel: %s" % pkgname)
-
- # check if the minimum set of packages is installed
- # for RHEL4 RHCS(ccs, cman, cman-kernel, magma, magma-plugins, (dlm, dlm-kernel) || gulm, perl-Net-Telnet, rgmanager, fence)
- # RHEL4 GFS (GFS, GFS-kernel, ccs, lvm2-cluster, fence)
-
- for pkg in [ "ccs", "cman", "magma", "magma-plugins", "perl-Net-Telnet", "rgmanager", "fence" ]:
- if self.cInfo["policy"].pkgByName(pkg) == None:
- self.addDiagnose("required package is missing: %s" % pkg)
-
+ rhelver = self.cInfo["policy"].rhelVersion()
+
+ # check if the minimum set of packages is installed
+ # for RHEL4 RHCS(ccs, cman, cman-kernel, magma, magma-plugins, (dlm, dlm-kernel) || gulm, perl-Net-Telnet, rgmanager, fence)
+ # RHEL4 GFS (GFS, GFS-kernel, ccs, lvm2-cluster, fence)
+
+ kernel_pkgs = []
+ pkgs_check = []
+ mods_check = []
+ serv_check = []
+
+ if rhelver == 4:
+ kernel_pkgs = [ "dlm-kernel" , "cman-kernel" ]
+ if self.has_gfs():
+ kernel_pkgs.append("GFS-kernel")
+ pkgs_check.extend( [ "ccs", "cman", "magma", "magma-plugins", "perl-Net-Telnet", "rgmanager", "fence" ] )
+ mods_check.extend( [ "cman", "dlm" ] )
+ if self.has_gfs():
+ mods_check.append("gfs")
+ serv_check.extend( [ "cman", "ccsd", "rgmanager", "fenced" ] )
+ if self.has_gfs():
+ serv_check.extend( ["gfs", "clvmd"] )
+ elif rhelver == 5:
+ if self.has_gfs():
+ kernel_pkgs.append("kmod-gfs")
+ pkgs_check.extend ( [ "cman", "perl-Net-Telnet", "rgmanager" ] )
+ mods_check.extend( [ "dlm" ] )
+ if self.has_gfs():
+ mods_check.extend( ["gfs", "gfs2"] )
+ serv_check.extend( [ "cman", "rgmanager" ] )
+ if self.has_gfs():
+ serv_check.extend( ["gfs", "clvmd"] )
+
+ # check that kernel module packages are installed for
+ # running kernel version
+
+ for pkgname in kernel_pkgs:
+ found = 0
+
+ # FIXME: make sure it works on RHEL4
+ for pkg in self.cInfo["policy"].allPkgsByNameRegex( "^" + pkgname ):
+ found = 1
+ for reqline in pkg.dsFromHeader('requirename'):
+ reqline = reqline[0].split()
+ try:
+ if reqline[1].startswith("kernel") and reqline[2] == "=" and reqline[3] == self.cInfo["policy"].kernelVersion():
+ found = 2
+ break
+ except IndexError:
+ pass
+
+ if found == 0:
+ self.addDiagnose("required kernel package is missing: %s" % pkgname)
+ elif found == 1:
+ self.addDiagnose("required package is not installed for current kernel: %s" % pkgname)
+
+ for pkg in pkgs_check:
+ if self.cInfo["policy"].pkgByName(pkg) == None:
+ self.addDiagnose("required package is missing: %s" % pkg)
+
+        if rhelver == 4:
# (dlm, dlm-kernel) || gulm
if not ((self.cInfo["policy"].pkgByName("dlm") and self.cInfo["policy"].pkgByName("dlm-kernel")) or self.cInfo["policy"].pkgByName("gulm")):
self.addDiagnose("required packages are missing: (dlm, dlm-kernel) || gulm")
- # let's make modules are loaded
- mods_check = [ "cman", "dlm" ]
- if self.has_gfs(): mods_check.append("gfs")
- for module in mods_check:
- if len(self.fileGrep("^%s " % module, "/proc/modules")) == 0:
- self.addDiagnose("required package is present but not loaded: %s" % module)
-
- # check if all the needed daemons are active at sosreport time
- # check if they are started at boot time in RHEL4 RHCS (cman, ccsd, rgmanager, fenced)
- # and GFS (gfs, ccsd, clvmd, fenced)
- checkserv = [ "cman", "ccsd", "rgmanager", "fenced" ]
- if self.has_gfs(): checkserv.extend( ["gfs", "clvmd"] )
- for service in checkserv:
- status, output = commands.getstatusoutput("/sbin/service %s status" % service)
- if status:
- self.addDiagnose("service %s is not running" % service)
- else:
- # service is running, extra sanity checks
- if service == "fenced":
- # also make sure fenced is a registered cluster service
- try:
- if len(self.fileGrep("^Fence Domain:\W", "/proc/cluster/services")) == 0:
- self.addDiagnose("fencing service is not registered with cman")
- except:
- pass
- elif service == "rgmanager":
- # also make sure rgmanager is a registered cluster service
- try:
- if len(self.fileGrep("^User:\W*usrm::manager", "/proc/cluster/services")) == 0:
- self.addDiagnose("rgmanager is not registered with cman")
- except:
- pass
-
- if not self.cInfo["policy"].runlevelDefault() in self.cInfo["policy"].runlevelByService(service):
- self.addDiagnose("service %s is not started in default runlevel" % service)
-
- # FIXME: any cman service whose state != run ?
- # Fence Domain: "default" 2 2 run -
-
- # is cluster quorate
- if not self.is_cluster_quorate():
- self.addDiagnose("cluster node is not quorate")
-
- # if there is no cluster.conf, diagnose() finishes here.
- try:
- os.stat("/etc/cluster/cluster.conf")
- except:
- self.addDiagnose("/etc/cluster/cluster.conf is missing")
- return
-
- # setup XML xpath context
- xml = libxml2.parseFile("/etc/cluster/cluster.conf")
- xpathContext = xml.xpathNewContext()
-
- # check fencing (warn on no fencing)
- if len(xpathContext.xpathEval("/cluster/clusternodes/clusternode[not(fence/method/device)]")):
- if self.has_gfs():
- self.addDiagnose("one or more nodes have no fencing agent configured: fencing is required for GFS to work")
- else:
- self.addDiagnose("one or more nodes have no fencing agent configured: the cluster infrastructure might not work as intended")
-
- # check fencing (warn on manual)
- if len(xpathContext.xpathEval("/cluster/clusternodes/clusternode[/cluster/fencedevices/fencedevice[@agent='fence_manual']/@name=fence/method/device/@name]")):
- self.addDiagnose("one or more nodes have manual fencing agent configured (data integrity is not guaranteed)")
-
- # if fence_ilo or fence_drac, make sure acpid is not running
- hostname = commands.getoutput("/bin/uname -n").split(".")[0]
- if len(xpathContext.xpathEval('/cluster/clusternodes/clusternode[@name = "%s" and /cluster/fencedevices/fencedevice[@agent="fence_rsa" or @agent="fence_drac"]/@name=fence/method/device/@name]' % hostname )):
- status, output = commands.getstatusoutput("/sbin/service acpid status")
- if status == 0 or self.cInfo["policy"].runlevelDefault() in self.cInfo["policy"].runlevelByService("acpid"):
- self.addDiagnose("acpid is enabled, this may cause problems with your fencing method.")
-
- # check for fs exported via nfs without nfsid attribute
- if len(xpathContext.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")):
- self.addDiagnose("one or more nfs export do not have a fsid attribute set.")
-
- # cluster.conf file version and the in-memory cluster configuration version matches
- status, cluster_version = commands.getstatusoutput("cman_tool status | grep 'Config version'")
- if not status: cluster_version = cluster_version[16:]
- else: cluster_version = None
- conf_version = xpathContext.xpathEval("/cluster/@config_version")[0].content
-
- if status == 0 and conf_version != cluster_version:
- self.addDiagnose("cluster.conf and in-memory configuration version differ (%s != %s)" % (conf_version, cluster_version) )
-
- # make sure the first part of the lock table matches the cluster name
- # and that the locking protocol is sane
- cluster_name = xpathContext.xpathEval("/cluster/@name")[0].content
-
- for fs in self.fileGrep(r'^[^#][/\w]*\W*[/\w]*\W*gfs', "/etc/fstab"):
- # for each gfs entry
- fs = fs.split()
-
- lockproto = self.get_gfs_sb_field(fs[0], "sb_lockproto")
- if lockproto and lockproto != self.get_locking_proto():
- self.addDiagnose("gfs mountpoint (%s) is using the wrong locking protocol (%s)" % (fs[0], lockproto) )
-
- locktable = self.get_gfs_sb_field(fs[0], "sb_locktable")
- try: locktable = locktable.split(":")[0]
- except: continue
- if locktable != cluster_name:
- self.addDiagnose("gfs mountpoint (%s) is using the wrong locking table" % fs[0])
+ for module in mods_check:
+ if len(self.fileGrep("^%s\s+" % module, "/proc/modules")) == 0:
+ self.addDiagnose("required module is not loaded: %s" % module)
+
+ # check if all the needed daemons are active at sosreport time
+ # check if they are started at boot time in RHEL4 RHCS (cman, ccsd, rgmanager, fenced)
+ # and GFS (gfs, ccsd, clvmd, fenced)
+
+ for service in serv_check:
+            status, output = commands.getstatusoutput("/sbin/service %s status" % service)
+ if status != 0:
+ self.addDiagnose("service %s is not running" % service)
+
+ if not self.cInfo["policy"].runlevelDefault() in self.cInfo["policy"].runlevelByService(service):
+ self.addDiagnose("service %s is not started in default runlevel" % service)
+
+ # FIXME: missing important cman services
+ # FIXME: any cman service whose state != run ?
+ # Fence Domain: "default" 2 2 run -
+
+ # is cluster quorate
+ if not self.is_cluster_quorate():
+ self.addDiagnose("cluster node is not quorate")
+
+ # if there is no cluster.conf, diagnose() finishes here.
+ try:
+ os.stat("/etc/cluster/cluster.conf")
+ except:
+ self.addDiagnose("/etc/cluster/cluster.conf is missing")
+ return
+
+ # setup XML xpath context
+ xml = libxml2.parseFile("/etc/cluster/cluster.conf")
+ xpathContext = xml.xpathNewContext()
+
+ # check fencing (warn on no fencing)
+ if len(xpathContext.xpathEval("/cluster/clusternodes/clusternode[not(fence/method/device)]")):
+ if self.has_gfs():
+ self.addDiagnose("one or more nodes have no fencing agent configured: fencing is required for GFS to work")
+ else:
+ self.addDiagnose("one or more nodes have no fencing agent configured: the cluster infrastructure might not work as intended")
+
+ # check fencing (warn on manual)
+ if len(xpathContext.xpathEval("/cluster/clusternodes/clusternode[/cluster/fencedevices/fencedevice[@agent='fence_manual']/@name=fence/method/device/@name]")):
+ self.addDiagnose("one or more nodes have manual fencing agent configured (data integrity is not guaranteed)")
+
+ # if fence_ilo or fence_drac, make sure acpid is not running
+ hostname = commands.getoutput("/bin/uname -n").split(".")[0]
+ if len(xpathContext.xpathEval('/cluster/clusternodes/clusternode[@name = "%s" and /cluster/fencedevices/fencedevice[@agent="fence_rsa" or @agent="fence_drac"]/@name=fence/method/device/@name]' % hostname )):
+ status, output = commands.getstatusoutput("/sbin/service acpid status")
+ if status == 0 or self.cInfo["policy"].runlevelDefault() in self.cInfo["policy"].runlevelByService("acpid"):
+ self.addDiagnose("acpid is enabled, this may cause problems with your fencing method.")
+
+ # check for fs exported via nfs without nfsid attribute
+ if len(xpathContext.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")):
+ self.addDiagnose("one or more nfs export do not have a fsid attribute set.")
+
+ # cluster.conf file version and the in-memory cluster configuration version matches
+ status, cluster_version = commands.getstatusoutput("cman_tool status | grep 'Config version'")
+ if not status: cluster_version = cluster_version[16:]
+ else: cluster_version = None
+ conf_version = xpathContext.xpathEval("/cluster/@config_version")[0].content
+
+ if status == 0 and conf_version != cluster_version:
+ self.addDiagnose("cluster.conf and in-memory configuration version differ (%s != %s)" % (conf_version, cluster_version) )
+
+ # make sure the first part of the lock table matches the cluster name
+ # and that the locking protocol is sane
+ cluster_name = xpathContext.xpathEval("/cluster/@name")[0].content
+
+ for fs in self.fileGrep(r'^[^#][/\w]*\W*[/\w]*\W*gfs', "/etc/fstab"):
+ # for each gfs entry
+ fs = fs.split()
+ lockproto = self.get_gfs_sb_field(fs[0], "sb_lockproto")
+ if lockproto and lockproto != self.get_locking_proto():
+ self.addDiagnose("gfs mountpoint (%s) is using the wrong locking protocol (%s)" % (fs[0], lockproto) )
+
+ locktable = self.get_gfs_sb_field(fs[0], "sb_locktable")
+ try: locktable = locktable.split(":")[0]
+ except: continue
+ if locktable != cluster_name:
+ self.addDiagnose("gfs mountpoint (%s) is using the wrong locking table" % fs[0])
def setup(self):
self.collectExtOutput("/sbin/fdisk -l")
@@ -204,12 +214,12 @@ class cluster(sos.plugintools.PluginBase):
self.addCopySpec("/etc/cluster")
self.collectExtOutput("/usr/sbin/rg_test test /etc/cluster/cluster.conf")
self.addCopySpec("/proc/cluster")
- self.collectExtOutput("/usr/bin/cman_tool status")
- self.collectExtOutput("/usr/bin/cman_tool services")
- self.collectExtOutput("/usr/bin/cman_tool -af nodes")
- self.collectExtOutput("/usr/bin/ccs_tool lsnode")
- self.collectExtOutput("/usr/bin/openais-cfgtool -s")
- self.collectExtOutput("/usr/bin/clustat")
+ self.collectExtOutput("cman_tool status")
+ self.collectExtOutput("cman_tool services")
+ self.collectExtOutput("cman_tool -af nodes")
+ self.collectExtOutput("ccs_tool lsnode")
+ self.collectExtOutput("openais-cfgtool -s")
+ self.collectExtOutput("clustat")
self.collectExtOutput("/sbin/ipvsadm -L")
@@ -232,20 +242,24 @@ class cluster(sos.plugintools.PluginBase):
self.addCopySpec("/var/log/messages")
def do_lockdump(self):
- try:
- fp = open("/proc/cluster/services","r")
- except:
- return
- for line in fp.readlines():
- if line[0:14] == "DLM Lock Space":
- try:
- lockspace = line.split('"')[1]
- except:
- pass
- else:
- commands.getstatusoutput("echo %s > /proc/cluster/dlm_locks" % lockspace)
- self.collectOutputNow("cat /proc/cluster/dlm_locks", root_symlink = "dlm_locks_%s" % lockspace)
- fp.close()
+ status, output = commands.getstatusoutput("cman_tool services")
+ if status:
+ # command somehow failed
+ return False
+
+ import re
+
+ rhelver = self.get_redhat_release()
+
+ if rhelver == "4":
+ regex = r'^DLM Lock Space:\s*"([^"]*)".*$'
+ elif rhelver == "5Server" or rhelver == "5Client":
+            regex = r'^dlm\s+[^\s]+\s+([^\s]+)\s.*$'
+        else:
+            # unknown release, nothing to parse
+            return False
+
+ reg=re.compile(regex,re.MULTILINE)
+ for lockspace in reg.findall(output):
+ commands.getstatusoutput("echo %s > /proc/cluster/dlm_locks" % lockspace)
+ self.collectOutputNow("cat /proc/cluster/dlm_locks", root_symlink = "dlm_locks_%s" % lockspace)
def get_locking_proto(self):
# FIXME: what's the best way to find out ?
@@ -253,15 +267,11 @@ class cluster(sos.plugintools.PluginBase):
return "lock_gulm"
def do_gfslockdump(self):
- fp = open("/proc/mounts","r")
- for line in fp.readlines():
- mntline = line.split(" ")
- if mntline[2] == "gfs":
- self.collectExtOutput("/sbin/gfs_tool lockdump %s" % mntline[1], root_symlink = "gfs_lockdump_" + self.mangleCommand(mntline[1]) )
- fp.close()
-
- def do_rgmgr_bt(self):
- # FIXME: threads backtrace
+ for mntpoint in self.doRegexFindAll(r'^\S+\s+([^\s]+)\s+gfs\s+.*$', "/proc/mounts"):
+ self.collectExtOutput("/sbin/gfs_tool lockdump %s" % mntpoint, root_symlink = "gfs_lockdump_" + self.mangleCommand(mntpoint) )
+
+ def do_rgmanager_bt(self):
+ # FIXME: threads backtrace via SIGALRM
return
def postproc(self):
@@ -269,8 +279,7 @@ class cluster(sos.plugintools.PluginBase):
return
def is_cluster_quorate(self):
- # FIXME: use self.fileGrep() instead
- output = commands.getoutput("/bin/cat /proc/cluster/status | grep '^Membership state: '")
+ output = commands.getoutput("cman_tool status | grep '^Membership state: '")
try:
if output[18:] == "Cluster-Member":
return True
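
The rewritten do_lockdump() above extracts DLM lockspace names from "cman_tool services" output with a release-specific regular expression instead of reading /proc/cluster/services directly. A small sketch of that extraction; the regexes are the ones used by the plugin, while the sample line is an assumption about what RHEL5 output looks like:

    # Illustrative only: the sample string below is a made-up approximation
    # of "cman_tool services" output.
    import re

    rhel4_re = re.compile(r'^DLM Lock Space:\s*"([^"]*)".*$', re.MULTILINE)
    rhel5_re = re.compile(r'^dlm\s+[^\s]+\s+([^\s]+)\s.*$', re.MULTILINE)

    sample = 'dlm              1   clvmd    00010001 none\n'
    print rhel5_re.findall(sample)   # -> ['clvmd']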
diff --git a/src/lib/sos/plugins/filesys.py b/src/lib/sos/plugins/filesys.py
index 3ae4da51..73bd3d88 100644
--- a/src/lib/sos/plugins/filesys.py
+++ b/src/lib/sos/plugins/filesys.py
@@ -28,7 +28,6 @@ class filesys(sos.plugintools.PluginBase):
self.addCopySpec("/etc/mdadm.conf")
self.collectExtOutput("/bin/df -al", root_symlink = "df")
- self.collectExtOutput("/usr/sbin/lsof -b +M -n -l", root_symlink = "lsof")
self.collectExtOutput("/bin/mount -l", root_symlink = "mount")
self.collectExtOutput("/sbin/blkid")
diff --git a/src/lib/sos/plugins/kernel.py b/src/lib/sos/plugins/kernel.py
index 3640f3fa..a5ffd855 100644
--- a/src/lib/sos/plugins/kernel.py
+++ b/src/lib/sos/plugins/kernel.py
@@ -49,22 +49,24 @@ class kernel(sos.plugintools.PluginBase):
def setup(self):
self.collectExtOutput("/bin/uname -a", root_symlink = "uname")
self.moduleFile = self.collectOutputNow("/sbin/lsmod", root_symlink = "lsmod")
+
if self.isOptionEnabled('modinfo'):
- runcmd = ""
- for kmod in commands.getoutput('/sbin/lsmod | /bin/cut -f1 -d" " 2>/dev/null | /bin/grep -v Module 2>/dev/null').split('\n'):
- if '' != kmod.strip():
- runcmd = runcmd + " " + kmod
- if len(runcmd):
- self.collectExtOutput("/sbin/modinfo " + runcmd)
+ runcmd = ""
+ for kmod in commands.getoutput('/sbin/lsmod | /bin/cut -f1 -d" " 2>/dev/null | /bin/grep -v Module 2>/dev/null').split('\n'):
+ if '' != kmod.strip():
+ runcmd = runcmd + " " + kmod
+ if len(runcmd):
+ self.collectExtOutput("/sbin/modinfo " + runcmd)
+
self.collectExtOutput("/sbin/sysctl -a")
self.collectExtOutput("/sbin/ksyms")
self.addCopySpec("/sys/module/*/parameters")
self.addCopySpec("/proc/filesystems")
self.addCopySpec("/proc/ksyms")
self.addCopySpec("/proc/slabinfo")
+ # FIXME: kver should have this stuff cached somewhere
kver = commands.getoutput('/bin/uname -r')
- depfile = "/lib/modules/%s/modules.dep" % (kver,)
- self.addCopySpec(depfile)
+ self.addCopySpec("/lib/modules/%s/modules.dep" % kver)
self.addCopySpec("/etc/conf.modules")
self.addCopySpec("/etc/modules.conf")
self.addCopySpec("/etc/modprobe.conf")
@@ -72,25 +74,18 @@ class kernel(sos.plugintools.PluginBase):
self.addCopySpec("/proc/cmdline")
self.addCopySpec("/proc/driver")
self.addCopySpec("/proc/sys/kernel/tainted")
- # FIXME: both RHEL4 and RHEL5 don't need sysrq to be enabled to trigger via sysrq-trigger
- if self.isOptionEnabled('sysrq') and os.access("/proc/sysrq-trigger", os.W_OK) and os.access("/proc/sys/kernel/sysrq", os.R_OK):
- sysrq_state = commands.getoutput("/bin/cat /proc/sys/kernel/sysrq")
- commands.getoutput("/bin/echo 1 > /proc/sys/kernel/sysrq")
- for key in ['m', 'p', 't']:
- commands.getoutput("/bin/echo %s > /proc/sysrq-trigger" % (key,))
- commands.getoutput("/bin/echo %s > /proc/sys/kernel/sysrq" % (sysrq_state,))
- # No need to grab syslog here if we can't trigger sysrq, so keep this
- # inside the if
- self.addCopySpec("/var/log/messages")
-
+
+ if self.isOptionEnabled('sysrq') and os.access("/proc/sysrq-trigger", os.W_OK):
+ for key in ['m', 'p', 't']:
+ commands.getoutput("/bin/echo %s > /proc/sysrq-trigger" % (key,))
+ self.addCopySpec("/var/log/messages")
+
return
- def analyze(self):
- infd = open("/proc/modules", "r")
- modules = infd.readlines()
- infd.close()
+ def diagnose(self):
- for modname in modules:
+ infd = open("/proc/modules", "r")
+ for modname in infd.readlines():
modname=modname.split(" ")[0]
modinfo_srcver = commands.getoutput("/sbin/modinfo -F srcversion %s" % modname)
if not os.access("/sys/module/%s/srcversion" % modname, os.R_OK):
@@ -99,13 +94,17 @@ class kernel(sos.plugintools.PluginBase):
sys_srcver = infd.read().strip("\n")
infd.close()
if modinfo_srcver != sys_srcver:
- self.addAlert("Loaded module %s differs from the one present on the file-system")
+            self.addDiagnose("Loaded module %s differs from the one present on the file-system" % modname)
# this would be a good moment to check the module's signature
# but at the moment there's no easy way to do that outside of
# the kernel. i will probably need to write a C lib (derived from
# the kernel sources to do this verification.
+ infd.close()
+
+ def analyze(self):
+
savedtaint = os.path.join(self.cInfo['dstroot'], "/proc/sys/kernel/tainted")
infd = open(savedtaint, "r")
line = infd.read()
@@ -114,12 +113,10 @@ class kernel(sos.plugintools.PluginBase):
if (line != "0"):
self.addAlert("Kernel taint flag is <%s>\n" % line)
-
infd = open(self.moduleFile, "r")
modules = infd.readlines()
infd.close()
- #print(modules)
for tainter in self.taintList:
p = re.compile(tainter['regex'])
for line in modules:
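
When the 'sysrq' option is enabled, setup() now simply writes the m, p and t keys to /proc/sysrq-trigger (which on RHEL4/RHEL5 works without toggling the sysrq sysctl) and then collects /var/log/messages, where the resulting dumps land. The equivalent operation as a stand-alone sketch:

    # Sketch of the sysrq dump performed by kernel.py setup(); writing to
    # /proc/sysrq-trigger requires root, just as sosreport itself does.
    import os

    def trigger_sysrq(keys=('m', 'p', 't')):
        if not os.access("/proc/sysrq-trigger", os.W_OK):
            return False
        for key in keys:
            fd = open("/proc/sysrq-trigger", "w")
            fd.write(key + "\n")
            fd.close()
        return True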
diff --git a/src/lib/sos/plugins/ldap.py b/src/lib/sos/plugins/ldap.py
index 59ab53fc..47ac0612 100644
--- a/src/lib/sos/plugins/ldap.py
+++ b/src/lib/sos/plugins/ldap.py
@@ -40,17 +40,19 @@ class ldap(sos.plugintools.PluginBase):
def diagnose(self):
# Validate ldap client options
ldapopts=self.get_ldap_opts()
- try:
- os.stat(ldapopts["TLS_CACERTDIR"])
- except:
- self.addDiagnose("%s does not exist and can cause connection issues "+
- "involving TLS" % ldapopts["TLS_CACERTDIR"])
+ if ldapopts.has_key("TLS_CACERTDIR"):
+ try:
+ os.stat(ldapopts["TLS_CACERTDIR"])
+ except:
+ self.addDiagnose("%s does not exist and can cause connection issues involving TLS" % ldapopts["TLS_CACERTDIR"])
def setup(self):
self.addCopySpec("/etc/ldap.conf")
self.addCopySpec("/etc/openldap")
- self.addCopySpec(self.get_slapd_debug())
- return
+
+ slapd_debug_file = self.get_slapd_debug()
+ if slapd_debug_file:
+ self.addCopySpec(slapd_debug_file)
def postproc(self):
self.doRegexSub("/etc/ldap.conf", r"(\s*bindpw\s*)\S+", r"\1***")
diff --git a/src/lib/sos/plugins/networking.py b/src/lib/sos/plugins/networking.py
index 1dcb0375..aaf78234 100644
--- a/src/lib/sos/plugins/networking.py
+++ b/src/lib/sos/plugins/networking.py
@@ -53,13 +53,12 @@ class networking(sos.plugintools.PluginBase):
self.addCopySpec("/etc/resolv.conf")
ifconfigFile=self.collectOutputNow("/sbin/ifconfig -a", root_symlink = "ifconfig")
self.collectExtOutput("/sbin/route -n", root_symlink = "route")
- self.collectExtOutput("/sbin/ipchains -nvL")
self.collectIPTable("filter")
self.collectIPTable("nat")
self.collectIPTable("mangle")
self.collectExtOutput("/bin/netstat -s")
self.collectExtOutput("/bin/netstat -neopa", root_symlink = "netstat")
- # FIXME: we should collect "ip route table <tablename>" for all tables (from "ip rule")
+ self.collectExtOutput("/sbin/ip route show table all")
self.collectExtOutput("/sbin/ip link")
self.collectExtOutput("/sbin/ip address")
self.collectExtOutput("/sbin/ifenslave -a")
diff --git a/src/lib/sos/plugins/process.py b/src/lib/sos/plugins/process.py
index ce4ef227..d0243b46 100644
--- a/src/lib/sos/plugins/process.py
+++ b/src/lib/sos/plugins/process.py
@@ -25,6 +25,7 @@ class process(sos.plugintools.PluginBase):
self.collectExtOutput("/bin/ps auxwwwm")
self.collectExtOutput("/bin/ps alxwww")
self.collectExtOutput("/usr/bin/pstree", root_symlink = "pstree")
+ self.collectExtOutput("/usr/sbin/lsof -b +M -n -l", root_symlink = "lsof")
return
def find_mountpoint(s):
@@ -50,12 +51,9 @@ class process(sos.plugintools.PluginBase):
# this should never happen...
pass
else:
+ # still D after 0.1 * range(1,5) seconds
dpids.append(int(line[1]))
- # FIXME: for each hung PID, list file-systems from /proc/$PID/fd
-# for pid in dpids:
-# realpath
-
if len(dpids):
self.addDiagnose("one or more processes are in state D (sosreport might hang)")
diff --git a/src/lib/sos/plugins/squid.py b/src/lib/sos/plugins/squid.py
index fdd3b8cf..7e0c3376 100644
--- a/src/lib/sos/plugins/squid.py
+++ b/src/lib/sos/plugins/squid.py
@@ -18,10 +18,8 @@ import os
class squid(sos.plugintools.PluginBase):
"""squid related information
"""
- def checkenabled(self):
- if self.cInfo["policy"].pkgByName("squid") != None or os.path.exists("/etc/squid/squid.conf"):
- return True
- return False
+ files = [ "/etc/squid/squid.conf" ]
+ packages = [ "squid" ]
def setup(self):
self.addCopySpec("/etc/squid/squid.conf")
diff --git a/src/lib/sos/plugins/veritas.py b/src/lib/sos/plugins/veritas.py
index a66b11af..a041c81b 100644
--- a/src/lib/sos/plugins/veritas.py
+++ b/src/lib/sos/plugins/veritas.py
@@ -70,9 +70,8 @@ class veritas(sos.plugintools.PluginBase):
"VRTSvlic"]
def checkenabled(self):
- for i in commands.getoutput("/bin/rpm -qa | /bin/grep -i VRTS"):
- pkg = i.split('-')[0]
- if self.cInfo["policy"].pkgByName(pkg) != None:
+ for pkgname in self.package_list:
+ if self.cInfo["policy"].allPkgsByName(pkgname):
return True
return False
diff --git a/src/lib/sos/plugins/yum.py b/src/lib/sos/plugins/yum.py
index 0cdf0740..0f0d049e 100644
--- a/src/lib/sos/plugins/yum.py
+++ b/src/lib/sos/plugins/yum.py
@@ -30,10 +30,7 @@ class yum(sos.plugintools.PluginBase):
# repo sanity checking
# TODO: elaborate/validate actual repo files, however this directory should
# be empty on RHEL 5+ systems.
- try: rhelver = self.cInfo["policy"].pkgDictByName("redhat-release")[0]
- except: rhelver = None
-
- if rhelver == "5" or True:
+ if self.cInfo["policy"].rhelVersion() == 5:
if len(os.listdir("/etc/yum.repos.d/")):
self.addAlert("/etc/yum.repos.d/ contains additional repository "+
"information and can cause rpm conflicts.")
diff --git a/src/lib/sos/plugintools.py b/src/lib/sos/plugintools.py
index eb53ca86..da323401 100644
--- a/src/lib/sos/plugintools.py
+++ b/src/lib/sos/plugintools.py
@@ -65,6 +65,9 @@ class PluginBase:
self.time_start = None
self.time_stop = None
+ self.packages = []
+ self.files = []
+
self.soslog = logging.getLogger('sos')
# get the option list into a dictionary
@@ -181,7 +184,7 @@ class PluginBase:
dstslname = sosRelPath(self.cInfo['rptdir'], abspath)
self.copiedDirs.append({'srcpath':srcpath, 'dstpath':dstslname, 'symlink':"yes", 'pointsto':os.path.abspath(srcpath+'/'+afile) })
else:
- self.soslog.log(logging.VERBOSE2, "copying symlink %s" % srcpath)
+ self.soslog.log(logging.VERBOSE3, "copying symlink %s" % srcpath)
try:
dstslname, abspath = self.__copyFile(srcpath)
self.copiedFiles.append({'srcpath':srcpath, 'dstpath':dstslname, 'symlink':"yes", 'pointsto':link})
@@ -206,6 +209,7 @@ class PluginBase:
else:
# This is not a directory or a symlink
tdstpath, abspath = self.__copyFile(srcpath)
+ self.soslog.log(logging.VERBOSE3, "copying file %s" % srcpath)
self.copiedFiles.append({'srcpath':srcpath, 'dstpath':tdstpath, 'symlink':"no"}) # save in our list
return abspath
@@ -259,6 +263,9 @@ class PluginBase:
def addCopySpecLimit(self,fname,sizelimit = None):
"""Add a file specification (with limits)
"""
+ if not ( fname and len(fname) ):
+ self.soslog.warning("invalid file path")
+ return False
files = glob.glob(fname)
files.sort()
cursize = 0
@@ -272,52 +279,21 @@ class PluginBase:
""" Add a file specification (can be file, dir,or shell glob) to be
copied into the sosreport by this module
"""
+ if not ( copyspec and len(copyspec) ):
+ self.soslog.warning("invalid file path")
+ return False
# Glob case handling is such that a valid non-glob is a reduced glob
for filespec in glob.glob(copyspec):
self.copyPaths.append(filespec)
- def copyFileGlob(self, srcglob):
- """ Deprecated - please modify modules to use addCopySpec()
- """
- sys.stderr.write("Warning: thecopyFileGlob() function has been deprecated. Please")
- sys.stderr.write("use addCopySpec() instead. Calling addCopySpec() now.")
- self.addCopySpec(srcglob)
-
- def copyFileOrDir(self, srcpath):
- """ Deprecated - please modify modules to use addCopySpec()
- """
- sys.stderr.write("Warning: the copyFileOrDir() function has been deprecated. Please\n")
- sys.stderr.write("use addCopySpec() instead. Calling addCopySpec() now.\n")
- raise ValueError
- #self.addCopySpec(srcpath)
-
- def runExeInd(self, exe):
- """ Deprecated - use callExtProg()
- """
- sys.stderr.write("Warning: the runExeInd() function has been deprecated. Please use\n")
- sys.stderr.write("the callExtProg() function. This should only be called\n")
- sys.stderr.write("if collect() is overridden.")
- pass
-
def callExtProg(self, prog):
""" Execute a command independantly of the output gathering part of
sosreport
"""
- # Log if binary is not runnable or does not exist
- if not os.access(prog.split()[0], os.X_OK):
- self.soslog.log(logging.VERBOSE, "binary '%s' does not exist or is not runnable" % prog.split()[0])
-
# pylint: disable-msg = W0612
status, shout, runtime = sosGetCommandOutput(prog)
return status
- def runExe(self, exe):
- """ Deprecated - use collectExtOutput()
- """
- sys.stderr.write("Warning: the runExe() function has been deprecated. Please use\n")
- sys.stderr.write("the collectExtOutput() function.\n")
- pass
-
def collectExtOutput(self, exe, suggest_filename = None, root_symlink = None):
"""
Run a program and collect the output
@@ -362,10 +338,6 @@ class PluginBase:
""" Execute a command and save the output to a file for inclusion in
the report
"""
- # First check to make sure the binary exists and is runnable.
- if not os.access(exe.split()[0], os.X_OK):
- self.soslog.log(logging.VERBOSE, "binary '%s' does not exist or is not runnable, trying anyways" % exe.split()[0])
-
# FIXME: we should have a timeout or we may end waiting forever
# pylint: disable-msg = W0612
@@ -379,7 +351,7 @@ class PluginBase:
if not os.path.isdir(os.path.dirname(outfn)):
os.mkdir(os.path.dirname(outfn))
- if not (status == 127 or status == 32512):
+ if not (status == 127 or status == 32512): # if not command_not_found
outfd = open(outfn, "w")
if len(shout): outfd.write(shout+"\n")
outfd.close()
@@ -486,23 +458,35 @@ class PluginBase:
try:
self.doCopyFileOrDir(path)
except SystemExit:
- raise SystemExit
+ if threaded:
+ return SystemExit
+ else:
+ raise SystemExit
except KeyboardInterrupt:
- raise KeyboardInterrupt
+ if threaded:
+ return KeyboardInterrupt
+ else:
+ raise KeyboardInterrupt
except Exception, e:
- self.soslog.log(logging.VERBOSE, "error copying from pathspec %s (%s), traceback follows:" % (path,e))
- self.soslog.log(logging.VERBOSE, traceback.format_exc())
+ self.soslog.log(logging.VERBOSE2, "error copying from pathspec %s (%s), traceback follows:" % (path,e))
+ self.soslog.log(logging.VERBOSE2, traceback.format_exc())
for (prog,suggest_filename,root_symlink) in self.collectProgs:
self.soslog.debug("collecting output of '%s'" % prog)
try:
self.collectOutputNow(prog, suggest_filename, root_symlink)
except SystemExit:
- raise SystemExit
+ if threaded:
+ return SystemExit
+ else:
+ raise SystemExit
except KeyboardInterrupt:
- raise KeyboardInterrupt
+ if threaded:
+ return KeyboardInterrupt
+ else:
+ raise KeyboardInterrupt
except:
- self.soslog.log(logging.VERBOSE, "error collection output of '%s', traceback follows:" % prog)
- self.soslog.log(logging.VERBOSE, traceback.format_exc())
+            self.soslog.log(logging.VERBOSE2, "error collecting output of '%s', traceback follows:" % prog)
+ self.soslog.log(logging.VERBOSE2, traceback.format_exc())
self.time_stop = time()
@@ -520,6 +504,16 @@ class PluginBase:
""" This function can be overidden to let the plugin decide whether
it should run or not.
"""
+        # if files or packages have been declared for this plugin, only
+        # enable it when at least one of them is present on the system
+        if len(self.files) or len(self.packages):
+            for file in self.files:
+                if os.path.exists(file):
+ return True
+ for pkgname in self.packages:
+ if self.cInfo["policy"].pkgByName(pkgname):
+ return True
+ return False
+
return True
def defaultenabled(self):
@@ -592,8 +586,11 @@ class PluginBase:
html = html + "<p>Commands Executed:<br><ul>\n"
# convert file name to relative path from our root
for cmd in self.executedCommands:
- cmdOutRelPath = sosRelPath(self.cInfo['rptdir'], self.cInfo['cmddir'] + "/" + cmd['file'])
- html = html + '<li><a href="%s">%s</a></li>\n' % (cmdOutRelPath, cmd['exe'])
+ if cmd["file"] and len(cmd["file"]):
+ cmdOutRelPath = sosRelPath(self.cInfo['rptdir'], self.cInfo['cmddir'] + "/" + cmd['file'])
+ html = html + '<li><a href="%s">%s</a></li>\n' % (cmdOutRelPath, cmd['exe'])
+ else:
+ html = html + '<li>%s</li>\n' % (cmd['exe'])
html = html + "</ul></p>\n"
# Alerts
@@ -609,4 +606,3 @@ class PluginBase:
html = html + self.customText + "</p>\n"
return html
-
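
With the new class-level files and packages lists, the default checkenabled() can decide on its own whether a plugin should run, which is what lets squid.py drop its hand-written check earlier in this diff. A hypothetical plugin using the mechanism might look like this:

    # Hypothetical plugin: the paths and package name are examples; only the
    # files/packages trigger mechanism comes from this diff.
    import sos.plugintools

    class myservice(sos.plugintools.PluginBase):
        """myservice related information"""
        files = [ "/etc/myservice.conf" ]   # enable if this file exists...
        packages = [ "myservice" ]          # ...or if this package is installed

        def setup(self):
            self.addCopySpec("/etc/myservice.conf")
            return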
diff --git a/src/lib/sos/policyredhat.py b/src/lib/sos/policyredhat.py
index d2139afa..54890b00 100755
--- a/src/lib/sos/policyredhat.py
+++ b/src/lib/sos/policyredhat.py
@@ -26,8 +26,7 @@ from sos.helpers import *
import random
import re
import md5
-
-SOME_PATH = "/tmp/SomePath"
+import rpm
#class SosError(Exception):
# def __init__(self, code, message):
@@ -37,11 +36,22 @@ SOME_PATH = "/tmp/SomePath"
# def __str__(self):
# return 'Sos Error %s: %s' % (self.code, self.message)
+def memoized(function):
+ ''' function decorator to allow caching of return values
+ '''
+ function.cache={}
+ def f(*args):
+ try:
+ return function.cache[args]
+ except KeyError:
+ result = function.cache[args] = function(*args)
+ return result
+ return f
class SosPolicy:
"This class implements various policies for sos"
def __init__(self):
- #print "Policy init"
+ self.report_file = None
return
def setCommons(self, commons):
@@ -55,41 +65,61 @@ class SosPolicy:
#print "validating %s" % pluginpath
return True
+ def pkgProvides(self, name):
+ pkg = self.pkgByName(name)
+ return pkg['providename']
+
def pkgRequires(self, name):
- # FIXME: we're relying on rpm to sort the output list
+ pkg = self.pkgByName(name)
+ return pkg['requirename']
+
cmd = "/bin/rpm -q --requires %s" % (name)
return [requires[:-1].split() for requires in os.popen(cmd).readlines()]
def allPkgsByName(self, name):
- # FIXME: we're relying on rpm to sort the output list
- cmd = "/bin/rpm --qf '%%{N} %%{V} %%{R} %%{ARCH}\n' -q %s" % (name,)
- pkgs = os.popen(cmd).readlines()
- return [pkg[:-1].split() for pkg in pkgs if pkg.startswith(name)]
+ return self.allPkgs("name", name)
+
+ def allPkgsByNameRegex(self, regex_name):
+ reg = re.compile(regex_name)
+ return [pkg for pkg in self.allPkgs() if reg.match(pkg['name'])]
def pkgByName(self, name):
# TODO: do a full NEVRA compare and return newest version, best arch
try:
# lame attempt at locating newest
- pkg = self.allPkgsByName(name)[-1]
- except IndexError:
- pkg = None
-
- return pkg
+ return self.allPkgsByName(name)[-1]
+ except:
+ pass
+ return None
def pkgDictByName(self, name):
+ # FIXME: what does this do?
pkgName = self.pkgByName(name)
if pkgName and len(pkgName) > len(name):
return pkgName[len(name)+1:].split("-")
else:
return None
+ def allPkgs(self, ds = None, value = None):
+ if not hasattr(self, "rpm_ts"):
+ self.rpm_ts = rpm.TransactionSet()
+ if ds and value:
+ mi = self.rpm_ts.dbMatch(ds, value)
+ else:
+ mi = self.rpm_ts.dbMatch()
+ return [pkg for pkg in mi]
+
def runlevelByService(self, name):
ret = []
try:
for tabs in commands.getoutput("/sbin/chkconfig --list %s" % name).split():
- (runlevel, onoff) = tabs.split(":")
- if onoff == "on":
- ret.append(int(runlevel))
+ try:
+ (runlevel, onoff) = tabs.split(":", 1)
+ except:
+ pass
+ else:
+ if onoff == "on":
+ ret.append(int(runlevel))
except:
pass
return ret
@@ -105,9 +135,21 @@ class SosPolicy:
def kernelVersion(self):
return commands.getoutput("/bin/uname -r").strip("\n")
+ def rhelVersion(self):
+ try:
+ pkgname = self.pkgByName("redhat-release")["version"]
+ if pkgname[0] == "4":
+ return 4
+ elif pkgname in [ "5Server", "5Client" ]:
+ return 5
+ except: pass
+ return False
+
def isKernelSMP(self):
- if self.kernelVersion()[-3:]=="smp": return True
- else: return False
+ if commands.getoutput("/bin/uname -v").split()[1] == "SMP":
+ return True
+ else:
+ return False
def pkgNVRA(self, pkg):
fields = pkg.split("-")
@@ -166,6 +208,9 @@ class SosPolicy:
# FIXME: use python internal command
os.system("/bin/mv %s %s" % (aliasdir, self.cInfo['dstroot']))
+ # FIXME: encrypt using gnupg
+ # gpg --trust-model always --batch --keyring /usr/share/sos/rhsupport.pub --no-default-keyring --compress-level 0 --encrypt --recipient support@redhat.com --output filename.gpg filename.tar
+
# add last 6 chars from md5sum to file name
fp = open(tarballName, "r")
md5out = md5.new(fp.read()).hexdigest()
@@ -187,5 +232,37 @@ class SosPolicy:
print _("Please send this file to your support representative.")
sys.stdout.write("\n")
+ self.report_file = tarballName
+
return
+ def uploadResults(self):
+ # make sure a report exists
+ if not self.report_file:
+ return False
+
+ # make sure it's readable
+ try:
+ fp = open(self.report_file, "r")
+ except:
+ return False
+
+ try:
+ from ftplib import FTP
+ upload_name = os.path.basename(self.report_file)
+
+ ftp = FTP('dropbox.redhat.com')
+ ftp.login()
+ ftp.cwd("/incoming")
+ ftp.set_pasv(True)
+ ftp.storbinary('STOR %s' % upload_name, fp)
+ ftp.quit()
+ except:
+ print _("There was a problem uploading your report to Red Hat support.")
+ else:
+ print _('Your report was uploaded successfully with name:')
+ print " " + upload_name
+ print
+ print _("Please communicate this name to your support representative.")
+
+ fp.close()
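Package lookups now go through the rpm bindings instead of parsing /bin/rpm
output, so callers get rpm header objects back and read fields by tag name.
A rough sketch of how the reworked helpers might be exercised (illustrative
only; the memoized decorator defined above is not applied anywhere in this
change, so its use below is merely an example):

    from sos.policyredhat import SosPolicy, memoized

    policy = SosPolicy()

    # pkgByName() returns an rpm header (or None); fields are read by tag name
    pkg = policy.pkgByName("kernel")
    if pkg:
        print "%s-%s-%s.%s" % (pkg['name'], pkg['version'], pkg['release'], pkg['arch'])

    # allPkgsByNameRegex() matches a regex against every installed package name
    for pkg in policy.allPkgsByNameRegex("^kernel"):
        print pkg['name']

    # memoized() caches results keyed on the argument tuple, so repeated
    # rpm database lookups could be wrapped like this (hypothetical helper):
    def installed(name):
        return policy.pkgByName(name) is not None
    installed = memoized(installed)
    installed("kernel")   # hits the rpm database
    installed("kernel")   # answered from the cache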
diff --git a/src/setup.py b/src/setup.py
index bef14704..b64c3593 100644
--- a/src/setup.py
+++ b/src/setup.py
@@ -9,6 +9,6 @@ setup(
packages = ['sos', 'sos.plugins'],
scripts = [],
package_dir = {'': 'lib',},
- data_files = [ ('/usr/sbin', ['sosreport', 'extras/sysreport/sysreport.legacy']), ('/usr/share/sysreport', ['extras/sysreport/text.xsl', 'extras/sysreport/functions', 'extras/sysreport/sysreport-fdisk']), ('/usr/share/man/man1', ['sosreport.1']), ('/usr/share/locale/en', []), ('/usr/share/locale/it', []), ('/usr/share/locale/en/LC_MESSAGES', ['locale/en/LC_MESSAGES/sos.mo']), ('/usr/share/locale/it/LC_MESSAGES', ['locale/it/LC_MESSAGES/sos.mo']), ('/usr/share/locale/fr/LC_MESSAGES', ['locale/fr/LC_MESSAGES/sos.mo']), ('/usr/share/locale/ar/LC_MESSAGES', ['locale/ar/LC_MESSAGES/sos.mo'])
+ data_files = [ ('/usr/sbin', ['sosreport', 'extras/sysreport/sysreport.legacy']), ('/usr/bin', ['extras/rh-upload-core']), ('/usr/share/sysreport', ['extras/sysreport/text.xsl', 'extras/sysreport/functions', 'extras/sysreport/sysreport-fdisk']), ('/usr/share/man/man1', ['sosreport.1']), ('/usr/share/locale/en', []), ('/usr/share/locale/it', []), ('/usr/share/locale/en/LC_MESSAGES', ['locale/en/LC_MESSAGES/sos.mo']), ('/usr/share/locale/it/LC_MESSAGES', ['locale/it/LC_MESSAGES/sos.mo']), ('/usr/share/locale/fr/LC_MESSAGES', ['locale/fr/LC_MESSAGES/sos.mo']), ('/usr/share/locale/ar/LC_MESSAGES', ['locale/ar/LC_MESSAGES/sos.mo'])
]
)
diff --git a/src/sos.spec b/src/sos.spec
index 9fb3e8dd..7ed24b30 100644
--- a/src/sos.spec
+++ b/src/sos.spec
@@ -2,7 +2,7 @@
%define name sos
%define version 1.7
-%define release 6
+%define release 8
%define _localedir %_datadir/locale
@@ -47,15 +47,23 @@ rm -rf ${RPM_BUILD_ROOT}
%files
%defattr(-,root,root,-)
%{_sbindir}/sosreport
+/usr/bin/rh-upload-core
/usr/sbin/sysreport
/usr/sbin/sysreport.legacy
/usr/share/sysreport
%{python_sitelib}/sos/
%{_mandir}/man1/sosreport.1*
%{_localedir}/*/LC_MESSAGES/sos.mo
-%doc README TODO LICENSE ChangeLog
+%doc README README.rh-upload-core TODO LICENSE ChangeLog
%changelog
+* Mon Aug 13 2007 Navid Sheikhol-Eslami <navid at redhat dot com> - 1.7-8
+- added README.rh-upload-core
+
+* Mon Aug 13 2007 Navid Sheikhol-Eslami <navid at redhat dot com> - 1.7-7
+- Resolves: bz251927 SOS errata needs to be respin to match 4.6 code base
+- added extras/rh-upload-core script from David Mair <dmair@redhat.com>
+
* Mon Aug 9 2007 Navid Sheikhol-Eslami <navid at redhat dot com> - 1.7-6
- more language fixes
- added arabic, italian and french
diff --git a/src/sosreport b/src/sosreport
index 76fc8206..daf43116 100755
--- a/src/sosreport
+++ b/src/sosreport
@@ -84,9 +84,6 @@ signal.signal(signal.SIGTERM, exittermhandler)
## FIXME: Need to figure out how to IPC with child threads in case of
## multiple SIGTERMs.
-# for debugging
-__raisePlugins__ = 0
-
class OptionParser_extended(OptionParser):
def print_help(self):
OptionParser.print_help(self)
@@ -133,9 +130,15 @@ __cmdParser__.add_option("-k", action="extend", \
__cmdParser__.add_option("-a", "--alloptions", action="store_true", \
dest="usealloptions", default=False, \
help="enable all options for loaded plugins")
+__cmdParser__.add_option("-u", "--upload", action="store_true", \
+ dest="upload", default=False, \
+ help="upload the report to Red Hat support")
__cmdParser__.add_option("-v", "--verbose", action="count", \
dest="verbosity", \
help="increase verbosity")
+__cmdParser__.add_option("--debug", action="count", \
+ dest="debug", \
+                         help="enable debugging")
__cmdParser__.add_option("--no-progressbar", action="store_false", \
dest="progressbar", default=True, \
help="do not display a progress bar.")
@@ -294,6 +297,13 @@ class XmlReport:
outfn.write(self.doc.serialize(None,1))
outfn.close()
+# if debugging is enabled, allow plugins to raise exceptions
+
+if __cmdLineOpts__.debug:
+ __raisePlugins__ = 1
+else:
+ __raisePlugins__ = 0
+
def sosreport():
# pylint: disable-msg = R0912
# pylint: disable-msg = R0914
@@ -528,7 +538,7 @@ def sosreport():
raw_input(_("""This utility will collect some detailed information about the
hardware and setup of your Red Hat Enterprise Linux system.
The information is collected and an archive is packaged under
-/tmp, which you can send to a support rappresentative.
+/tmp, which you can send to a support representative.
Red Hat will use this information for diagnostic purposes ONLY
and it will be considered confidential information.
@@ -537,7 +547,7 @@ No changes will be made to your system.
Press ENTER to continue, or CTRL-C to quit.
"""))
- except KeyboardInterrupt:
+ except:
print
sys.exit(0)
@@ -723,8 +733,6 @@ Press ENTER to continue, or CTRL-C to quit.
rfd.close()
- # Collect any needed user information (name, etc)
-
# Call the postproc method for each plugin
for plugname, plug in loadedplugins:
try:
@@ -737,7 +745,10 @@ Press ENTER to continue, or CTRL-C to quit.
policy.packageResults()
# delete gathered files
os.system("/bin/rm -rf %s" % dstroot)
+
# automated submission will go here
+ if __cmdLineOpts__.upload:
+ policy.uploadResults()
# Close all log files and perform any cleanup
logging.shutdown()
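With --debug wired to __raisePlugins__, plugin exceptions can be allowed to
propagate instead of being silently swallowed. The flag is typically consumed
where plugin methods are invoked, roughly along these lines (the loop mirrors
the loadedplugins iteration used elsewhere in sosreport, but the method call
is only illustrative):

    for plugname, plug in loadedplugins:
        try:
            plug.setup()            # hypothetical call, not the exact code
        except:
            if __raisePlugins__:
                raise               # --debug: let the traceback surface
            # otherwise note the failure and continue with the other plugins

A debugging run is then invoked as "sosreport -v --debug", while
"sosreport --upload" hands the finished tarball to policy.uploadResults()
for FTP submission to Red Hat.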