r5321 - trunk/mint/python/mint/plumage
by croberts@fedoraproject.org
Author: croberts
Date: 2012-04-19 19:50:45 +0000 (Thu, 19 Apr 2012)
New Revision: 5321
Modified:
trunk/mint/python/mint/plumage/REPORTING-README
Log:
updating REPORTING-README
Modified: trunk/mint/python/mint/plumage/REPORTING-README
===================================================================
--- trunk/mint/python/mint/plumage/REPORTING-README 2012-04-19 19:18:41 UTC (rev 5320)
+++ trunk/mint/python/mint/plumage/REPORTING-README 2012-04-19 19:50:45 UTC (rev 5321)
@@ -2,10 +2,18 @@
Description:
-This feature allows Cumin to use data from the Plumage (ODS) database to generate long-duration visualizations of grid system behavior.
-In order for cumin-report to be able to access the Plumage ODS, the Grid installation must include the Plumage plug-in and it must be turned on. The only configuration necessary to access it can be made by changing the plumage_host and plumage_port configuration parameters in /etc/cumin/cumin.conf under the [report] section of the configuration. The default values are plumage_host: localhost plumage_port: 27017.
-The CuminReporting feature can be activated by running the cumin-report program from $CUMIN_HOME/bin directory. cumin-report pulls data from the ODS (mongoDB) in the background. A full data load can be millions and millions of records and could take a considerable amount of time. Any data that has been loaded will immediately show up in the charts in the cumin UI.
+This feature allows Cumin to use data from the condor-plumage (ODS) database to generate long-duration visualizations of grid system behavior.
+In order for cumin-report to be able to access the condor-plumage ODS, the Grid installation must include the condor-plumage package and it must be turned on.
+The only configuration necessary to enable CuminReporting is in /etc/cumin/cumin.conf.
+Find the section that looks like the commented lines below and uncomment the "# reports: report" line:
+# Reporting is off by default.
+# To enable reporting features, uncomment the following line.
+# reports: report
+
+cumin-report pulls data from the ODS (mongoDB) in the background. A full data load can be millions and millions of records and could take a considerable amount of time.
+Any data that has been loaded will immediately show up in the charts in the cumin UI. There are threads that maintain an archive load (starting with current records and moving back in time), and another thread loads current records every 5 minutes.
+
Dependencies:
The CuminReporting feature has a dependency on
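The enablement step described in the README change above (uncomment the "# reports: report" line) can be sketched as a small check. This is an illustrative helper, not part of cumin; the sample file contents stand in for /etc/cumin/cumin.conf:

```python
# Hypothetical sketch: decide whether CuminReporting is enabled by checking
# for an uncommented "reports: report" line, as the README instructs.
def reporting_enabled(conf_text):
    for line in conf_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue                      # still commented out
        if stripped.replace(" ", "") == "reports:report":
            return True
    return False

before = "# To enable reporting features, uncomment the following line.\n# reports: report\n"
after = before.replace("# reports: report", "reports: report")
print(reporting_enabled(before), reporting_enabled(after))
```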
r5320 - trunk/mint/python/mint/plumage
by croberts@fedoraproject.org
Author: croberts
Date: 2012-04-19 19:18:41 +0000 (Thu, 19 Apr 2012)
New Revision: 5320
Added:
trunk/mint/python/mint/plumage/REPORTING-README
Log:
Including a REPORTING-README file to include the details required to get reporting (TP) up and running.
Added: trunk/mint/python/mint/plumage/REPORTING-README
===================================================================
--- trunk/mint/python/mint/plumage/REPORTING-README (rev 0)
+++ trunk/mint/python/mint/plumage/REPORTING-README 2012-04-19 19:18:41 UTC (rev 5320)
@@ -0,0 +1,29 @@
+Technology Preview feature CuminReporting
+
+Description:
+
+This feature allows Cumin to use data from the Plumage (ODS) database to generate long-duration visualizations of grid system behavior.
+In order for cumin-report to be able to access the Plumage ODS, the Grid installation must include the Plumage plug-in and it must be turned on. The only configuration necessary to access it can be made by changing the plumage_host and plumage_port configuration parameters in /etc/cumin/cumin.conf under the [report] section of the configuration. The default values are plumage_host: localhost plumage_port: 27017.
+The CuminReporting feature can be activated by running the cumin-report program from $CUMIN_HOME/bin directory. cumin-report pulls data from the ODS (mongoDB) in the background. A full data load can be millions and millions of records and could take a considerable amount of time. Any data that has been loaded will immediately show up in the charts in the cumin UI.
+
+Dependencies:
+
+The CuminReporting feature has a dependency on
+RHEL 6 or newer.
+pymongo version 1.9-8 or newer. To install you can run the following: # yum install pymongo
+
+Feedback:
+
+Bug reports or requests for enhancement can be made through http://bugzilla.redhat.com. General questions about this feature can be handled through the email list
+cumin-users@lists.fedorahosted.org
+
+Full support:
+
+This feature is intended to be fully supported in an upcoming minor release.
+Where to find this information:
+Content similar to this Release Note may be found in the file /usr/share/doc/cumin-*/REPORTING-README after the software is installed. However, the Release Note should be considered more up to date, and where there are any discrepancies, the Release Note supersedes the readme file.
+
+Technology Preview Policy:
+Technology Preview features are not currently supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the technologies with wider exposure.
+Customers may find these features useful in non-production environments, and can provide feedback and functionality suggestions prior to their transition to fully supported status. Errata will be provided for high-priority security issues.
+During its development additional components of a Technology Preview feature may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release.
\ No newline at end of file
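The README above says the ODS connection is configured through plumage_host and plumage_port under the [report] section, defaulting to localhost:27017. A minimal sketch of reading those settings and forming a standard MongoDB URI (the sample text stands in for cumin.conf; the URI form is generic MongoDB, not something cumin prints):

```python
# Hypothetical sketch: read the plumage_host / plumage_port parameters the
# README describes and build a MongoDB connection URI from them.
import configparser

sample = """
[report]
plumage_host: localhost
plumage_port: 27017
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
host = cfg.get("report", "plumage_host")
port = cfg.getint("report", "plumage_port")
uri = "mongodb://%s:%d" % (host, port)
print(uri)
```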
r5319 - trunk/cumin
by tmckay@fedoraproject.org
Author: tmckay
Date: 2012-04-19 18:39:31 +0000 (Thu, 19 Apr 2012)
New Revision: 5319
Modified:
trunk/cumin/Makefile
Log:
Remove installation of AVIARY-README
Modified: trunk/cumin/Makefile
===================================================================
--- trunk/cumin/Makefile 2012-04-19 18:38:32 UTC (rev 5318)
+++ trunk/cumin/Makefile 2012-04-19 18:39:31 UTC (rev 5319)
@@ -23,7 +23,6 @@
install -pm 0755 bin/cumin bin/cumin-* ${CUMIN_HOME}/bin
install -d ${CUMIN_HOME}/doc
install -pm 0644 LICENSE COPYING ${CUMIN_HOME}/doc
- install -pm 0644 ../sage/python/sage/aviary/AVIARY-README ${CUMIN_HOME}/doc
install -pm 0644 ../wooly/LICENSE-for-wsgiserver ${CUMIN_HOME}/doc
install -pm 0644 ../wooly/COPYING-for-wsgiserver ${CUMIN_HOME}/doc
install -d ${CUMIN_HOME}/model/upgrades
r5318 - trunk/sage/python/sage/aviary
by tmckay@fedoraproject.org
Author: tmckay
Date: 2012-04-19 18:38:32 +0000 (Thu, 19 Apr 2012)
New Revision: 5318
Removed:
trunk/sage/python/sage/aviary/AVIARY-README
Log:
Remove AVIARY-README
Deleted: trunk/sage/python/sage/aviary/AVIARY-README
===================================================================
--- trunk/sage/python/sage/aviary/AVIARY-README 2012-04-19 18:09:55 UTC (rev 5317)
+++ trunk/sage/python/sage/aviary/AVIARY-README 2012-04-19 18:38:32 UTC (rev 5318)
@@ -1,105 +0,0 @@
-Technology Preview feature CuminAviary
-
-Description:
-
-This feature allows Cumin to use the Aviary web services provided in the
-condor-aviary package for certain functions in the user interface. If the
-CuminAviary feature is enabled, Cumin will use Aviary services rather than QMF
-method calls where possible.
-
-The CuminAviary feature is controlled through the cumin configuration
-file. Relevant configuration parameters with descriptive comments can be found
-in the default /etc/cumin/cumin.conf file by searching for a line containing
-"Aviary interface to condor".
-
-Aviary provides a job service and a query service; Cumin may use either, both or
-neither. By default, Cumin will use QMF methods rather than Aviary services.
-
-To enable use of the Aviary job service, the 'aviary-job-servers' parameter must
-be uncommented and set (see the comments in the configuration file). Setting this
-parameter will cause Cumin to use the Aviary job service for job submission,
-for the hold, release, and remove job control functions, and for editing of
-job ad attributes.
-
-To enable use of the Aviary query service, the 'aviary-query-servers' parameter
-must be uncommented and set (see the comments in the configuration file). Setting
-this parameter will cause Cumin to use the Aviary query service for retrieving
-job output files, retrieving job ad details, and retreiving the list of jobs in a
-submission.
-
-Cumin will make INFO level entries in the log file for cumin-web that indicate
-whether use of the job and/or query services has been enabled and what type of
-certificate validation will be used for servers configured for SSL (see below).
-These log entries will begin with "AviaryOperations:" or contain the string
-"Aviary" somewhere in the message. If an Aviary operation fails, the yellow
-task banner associated with the operation will contain error information.
-
-By default, the Aviary services in condor will not use SSL (Secure Socket Layer)
-for communication and no other configuration parameters need to be set for
-this feature. However, if the Aviary services in condor have been configured to
-use SSL then additional configuration parameters must be set.
-
-First, note that the scheme for Aviary servers will change from "http" to "https"
-for any server using SSL. Failure to specify schemes correctly in the
-'aviary-job-servers' or 'aviary-query-servers' parameters will prevent the
-CuminAviary feature from functioning.
-
-Second, the 'aviary-key' and 'aviary-cert' parameters must be set. These
-parameters give the full paths to a PEM formated private key file and PEM
-formatted certificate file that Cumin will use as a client to access the Aviary
-services. The Aviary servers will validate Cumin's client certificate and allow
-access if validation succeeds.
-
-Optionally, the 'aviary-root-cert' parameter may be set. This is the full path
-to a PEM formatted file containing CA (certificate authority) certificates that
-Cumin will use to validate the server certificate. If this parameter is unset
-Cumin will NOT validate server certificates.
-
-Lastly, the 'aviary-domain-verify' parameter controls whether or not Cumin checks
-the hostname of the server against the server certificate during validation.
-This parameter has no effect unless 'aviary-root-cert' is set. The default value
-is True; it may be useful to set this parameter to False if the server is using a
-self-signed certificate with a non-matching hostname.
-
-Cumin will provide server certificate validation using the Python ssl standard
-language module if available or M2Crypto otherwise. If neither of these
-components are available, server certificate validation will be disabled.
-
-Dependencies:
-
-The CuminAviary feature has a dependency on python-suds-0.4.1 or newer. To date,
-this dependency is not enforced by the Cumin rpm. On a system without python-suds
-installed, Cumin will install and run but the Aviary interface will be disabled.
-If the CuminAviary feature is turned on in cumin.conf, an entry will be made in
-the log for cumin-web noting that Aviary has been disabled because of failed
-imports and Cumin will continue.
-
-Feedback:
-
-Bug reports or requests for enhancement can be made through
-http://bugzilla.redhat.com. General questions about this feature can be handled
-through the email list cumin-users@lists.fedorahosted.org
-
-Full support:
-
-This feature is intended to be fully supported in an upcoming minor release.
-
-Where to find this information:
-
-The content given here may be found in the Release Notes
-or in the file /usr/share/doc/cumin-*/AVIARY-README after installation.
-
-Technology Preview Policy:
-
-Technology Preview features are not currently supported under Red Hat Enterprise
-Linux subscription services, may not be functionally complete, and are generally
-not suitable for production use. However, these features are included as a
-customer convenience and to provide the technologies with wider exposure.
-
-Customers may find these features useful in non-production environments, and can
-provide feedback and functionality suggestions prior to their transition to fully
-supported status. Erratas will be provided for high-priority security issues.
-
-During its development additional components of a Technology Preview feature may
-become available to the public for testing. It is the intention of Red Hat to
-fully support Technology Preview features in a future release.
r5316 - in trunk/cumin: bin python/cumin python/cumin/account
by tmckay@fedoraproject.org
Author: tmckay
Date: 2012-04-19 17:48:29 +0000 (Thu, 19 Apr 2012)
New Revision: 5316
Modified:
trunk/cumin/bin/cumin-admin
trunk/cumin/python/cumin/account/widgets.py
trunk/cumin/python/cumin/authenticator.py
trunk/cumin/python/cumin/widgets.py
Log:
Modify external user treatment so that the cumin database may be sparse,
easing maintenance of external user accounts.
Cumin will look for users externally if they are not defined in the
local database, and will assume a role of 'user' for such a user.
External users with roles other than 'user' may still be defined in the local
database with the cumin-admin external-user command.
Bulk import (external-sync) has been disabled because it is not necessary.
BZ737979
Modified: trunk/cumin/bin/cumin-admin
===================================================================
--- trunk/cumin/bin/cumin-admin 2012-04-19 17:08:41 UTC (rev 5315)
+++ trunk/cumin/bin/cumin-admin 2012-04-19 17:48:29 UTC (rev 5316)
@@ -125,9 +125,11 @@
lines.append("User commands:")
lines.append("")
lines.append(" add-user USER [PASSWORD] Add USER")
- lines.append(" external-user USER Add USER with external authentication")
- lines.append(" external-sync AUTH [VERBOSE] Batch import users with external authentication")
- lines.append(" AUTH is the name of a mechanism like 'ldap' or 'script'")
+ lines.append(" external-user USER Add USER with external authentication.")
+ lines.append(" Use this command when a role other than 'user'")
+ lines.append(" will be set for this external user. Otherwise,")
+ lines.append(" this command is not necessary.")
+ lines.append("")
lines.append(" remove-user USER Remove USER")
lines.append(" add-assignment USER ROLE Add USER to ROLE")
lines.append(" remove-assignment USER ROLE Remove USER from ROLE")
@@ -368,6 +370,9 @@
print "External user '%s' is added" % name
def handle_external_sync(app, cursor, opts, args):
+ print "This command is not supported at this time."
+ return
+
try:
authenticatorname = args[0]
except IndexError:
Modified: trunk/cumin/python/cumin/account/widgets.py
===================================================================
--- trunk/cumin/python/cumin/account/widgets.py 2012-04-19 17:08:41 UTC (rev 5315)
+++ trunk/cumin/python/cumin/account/widgets.py 2012-04-19 17:48:29 UTC (rev 5316)
@@ -118,31 +118,34 @@
self.validate(session)
if not self.errors.get(session):
- cursor = self.app.database.get_read_cursor()
-
- cls = self.app.model.com_redhat_cumin.User
- user = cls.get_object(cursor, name=name)
-
- if not user:
- self.login_invalid.set(session, "credentials")
- return
-
- authenticated = self.app.authenticator.authenticate(name,password)
- if authenticated:
+ user, ok = self.app.authenticator.authenticate(name, password)
+ if ok:
# You're in! almost...
# Check for a valid group if group authorization is on
+ roles = []
if self.app.authorizator.is_enforcing():
- roles = self.app.admin.get_roles_for_user(cursor, user)
- if not self.app.authorizator.contains_valid_group(roles):
+ if user is None:
+ # This was an external user with no role
+ # entry in the Cumin database. We default the
+ # role to 'user'. In the future we may store
+ # roles externally as well.
+ roles = ['user']
+ else:
+ cursor = self.app.database.get_read_cursor()
+ roles = self.app.admin.get_roles_for_user(
+ cursor, user)
+ ok = self.app.authorizator.contains_valid_group(roles)
+ if not ok:
self.login_invalid.set(session, "roles")
- return
- else:
- roles = []
- login = LoginSession(self.app, user, roles)
- session.client_session.attributes["login_session"] = login
- url = self.page.origin.get(session)
- self.page.redirect.set(session, url)
+ # If we're still okay, set up the login
+ if ok:
+ if user is None:
+ user = name
+ login = LoginSession(self.app, user, roles)
+ session.client_session.attributes["login_session"] = login
+ url = self.page.origin.get(session)
+ self.page.redirect.set(session, url)
else:
self.login_invalid.set(session, "credentials")
@@ -188,7 +191,8 @@
user = session.client_session.attributes["login_session"].user
# In case a different login session for this user has made
# changes, refresh the user object
- user.load(session.cursor)
+ if hasattr(user, "load"):
+ user.load(session.cursor)
new0 = self.new0.get(session)
new1 = self.new1.get(session)
Modified: trunk/cumin/python/cumin/authenticator.py
===================================================================
--- trunk/cumin/python/cumin/authenticator.py 2012-04-19 17:08:41 UTC (rev 5315)
+++ trunk/cumin/python/cumin/authenticator.py 2012-04-19 17:48:29 UTC (rev 5316)
@@ -51,17 +51,24 @@
return False
return conn
- def change_pw(self,user,oldpass,newpass):
+ def _find_user(self, user):
+ res = False
conn = self.__prep_con()
- if not conn:
- return False
- filter = self.url.filterstr % user
- try:
- res = conn.search_s(self.url.dn, self.url.scope, filter)
- except Exception, e:
- log.error("Authenticator: update password, "\
- "query returned exception %s", e)
- res = []
+ if conn:
+ filter = self.url.filterstr % user
+ try:
+ res = conn.search_s(self.url.dn, self.url.scope, filter)
+ except Exception, e:
+ log.error("Authenticator: find_user, "\
+ "query returned exception %s", e)
+ return conn, res
+
+ def find_user(self, user):
+ conn, res = self._find_user(user)
+ return res is not False
+
+ def change_pw(self, user, oldpass, newpass):
+ conn, res = self._find_user(user)
if not res:
log.info("Authenticator: update password, "\
"query returned no results in %s", self.__class__.__name__)
@@ -72,8 +79,8 @@
log.info("Authenticator: update password succeeded in %s",
self.__class__.__name__)
_syslog.log("cumin: updated password via LDAP for user %s" % dn)
- return True
- return False
+ res = True
+ return res
def batch_import(self):
log.debug("Authenticator: batch import in %s", self.__class__.__name__)
@@ -100,16 +107,7 @@
def authenticate(self, username, password):
log.debug("Authenticating against %s", self.__class__.__name__)
- conn = self.__prep_con()
- if not conn:
- return False
- filter = self.url.filterstr % username
- try:
- res = conn.search_s(self.url.dn, self.url.scope, filter)
- except Exception, e:
- log.error("Authenticator: authenticate, "\
- "query returned exception %s", e)
- res = []
+ conn, res = self._find_user(username)
if not res:
log.info("Authenticator: authentication failed, "\
"query returned no results in %s", self.__class__.__name__)
@@ -160,6 +158,9 @@
if self.params.has_key('list'):
log.debug("Authenticator: adding list capability to %s", me)
self.cap.append('l')
+ if self.params.has_key('find'):
+ log.debug("Authenticator: adding find capability to %s", me)
+ self.cap.append('f')
else:
log.error('Authenticator: script parameter missing or target file '\
'not executable in %s', me)
@@ -167,8 +168,7 @@
def authenticate(self, username, password):
me = self.__class__.__name__
if 'a' in self.cap:
- log.debug("Authenticator: executing %s in %s",
- self.params['script'], me)
+ log.debug("Authenticator: authenticate in %s" % me)
# Use nameless pipes to pass parameters.
# If they are passed as arguments they show up in the output
# of ps, etc. This way is secure.
@@ -185,6 +185,24 @@
_syslog.log("cumin: authentication via script failed for user %s" % username)
return False
+ def find_user(self, username):
+ me = self.__class__.__name__
+ if 'f' in self.cap:
+ log.debug("Authenticator: executing find_user in %s" % me)
+ # Use nameless pipes to pass parameters.
+ # If they are passed as arguments they show up in the output
+ # of ps, etc. This way is secure.
+ # Note, the call to communicate causes an EOF on stdin
+ cmd = [self.params['script']]
+ res = subprocess.Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
+ out, err = res.communicate(input="%s %s" % \
+ (self.params['find'], username))
+ log.debug("Authenticator: find_user, script returned "\
+ "%s, %s, exitcode %s", out, err, res.returncode)
+ if res.returncode == 0:
+ return True
+ return False
+
def batch_import(self):
if 'l' in self.cap:
log.debug("Authenticator: batch import in %s",
@@ -204,7 +222,7 @@
def change_pw(self, user, newpass, oldpass):
if 'c' in self.cap:
me = self.__class__.__name__
- log.debug("Authenticator: change password in %s", me)
+ log.debug("Authenticator: change password in %s" % me)
# Use nameless pipes to pass parameters.
# If they are passed as arguments they show up in the output
@@ -309,22 +327,37 @@
cursor = self.app.database.get_read_cursor()
cls = self.app.model.com_redhat_cumin.User
user = cls.get_object(cursor, name=username)
- external = len(user.password) == 0
- log.info("Authenticator: authenticating external user %s" % external)
- if not external:
+ if user is None or len(user.password) == 0:
+ log.debug("Authenticator: authenticating external user")
+ for authenticator in self.authenticators:
+ log.debug("Authenticator: authenticate, try %s", authenticator)
+ res = authenticator.authenticate(username, password)
+ if res:
+ break
+ else:
res = crypt(password, user.password) == user.password
- if res:
- msg = "cumin: authenticated user %s" % username
- else:
- msg = "cumin: authentication failed for user %s" % username
+ msg = "cumin: authentication %s for user %s" \
+ % (res and "succeeded" or "failed", username)
_syslog.log(msg)
- return res
- else:
+ return user, res
+
+ def find_user(self, username):
+ # Like authenticate, but find_user only confirms the
+ # existence of a user. We are not concerned with password here.
+ log.debug("Authenticator: calling find_user")
+ cursor = self.app.database.get_read_cursor()
+ cls = self.app.model.com_redhat_cumin.User
+ user = cls.get_object(cursor, name=username)
+ res = user and len(user.password) > 0
+ if not res:
+ # Try to find the user externally
+ log.debug("Authenticator: finding external user")
for authenticator in self.authenticators:
- log.debug("Authenticator: authenticate, try %s", authenticator)
- if authenticator.authenticate(username, password):
- return True
- return False
+ log.debug("Authenticator: find_user, try %s", authenticator)
+ res = authenticator.find_user(username)
+ if res:
+ break
+ return user, res
def update_password(self, username, oldpassword, newpassword):
status = False
@@ -335,18 +368,21 @@
cursor = conn.cursor()
cls = self.app.model.com_redhat_cumin.User
user = cls.get_object(cursor, name=username)
- if user.password == "":
- for authenticator in self.authenticators:
- status = authenticator.authenticate(username, oldpassword)
- if status:
- status = authenticator.change_pw(username,
- oldpassword, newpassword)
- if not status:
- message = "Could not change password"
- break
+ if user is None or user.password == "":
+ # Disable change password for external users
+ message = "Change password is not allowed for external users"
- if not status and message == "":
- message = "The password is incorrect"
+ #for authenticator in self.authenticators:
+ # status = authenticator.authenticate(username, oldpassword)
+ # if status:
+ # status = authenticator.change_pw(username,
+ # oldpassword, newpassword)
+ # if not status:
+ # message = "Could not change password"
+ # break
+ #if not status and message == "":
+ # message = "The password is incorrect"
+
else:
status = crypt(oldpassword, user.password) == user.password
if status:
Modified: trunk/cumin/python/cumin/widgets.py
===================================================================
--- trunk/cumin/python/cumin/widgets.py 2012-04-19 17:08:41 UTC (rev 5315)
+++ trunk/cumin/python/cumin/widgets.py 2012-04-19 17:48:29 UTC (rev 5316)
@@ -1171,11 +1171,23 @@
class LoginSession(object):
def __init__(self, app, user, group):
self.app = app
- self.user = user
+
+ # If this is an external user, create
+ # an adapter here. Lots of things
+ # expect user.name to be defined.
+ if type(user) is str:
+ self.user = LoginSession.ExternalUser()
+ self.user.name = user
+ else:
+ self.user = user
+
self.group = group
self.created = datetime.now()
self.notifications = list()
+ class ExternalUser(object):
+ pass
+
class NotificationSet(Widget):
def __init__(self, app, name):
super(NotificationSet, self).__init__(app, name)
@@ -1309,24 +1321,25 @@
elif self.app.auth_proxy or self.app.user:
username = self.app.user
# proxy user overrides app defined user
- try:
- username = session.request_environment['HTTP_REMOTE_USER']
- except KeyError:
- log.debug("Proxy auth enabled but no remote user set")
- pass
+ if self.app.auth_proxy:
+ try:
+ username = session.request_environment['HTTP_REMOTE_USER']
+ except KeyError:
+ log.debug("Proxy auth enabled but no remote user set")
if username:
- cls = self.app.model.com_redhat_cumin.User
- users = cls.get_selection(session.cursor, name=username)
- if not users:
+ user, ok = self.app.authenticator.find_user(username)
+ if not ok:
+ # We couldn't find the user, internally or externally
if not self.app.auth_proxy:
log.info("User '%s' not found" % username)
+ return False
else:
log.info("User %s not found in db, "\
"using auth proxy", username )
#Hmmm, what is users here? For now just let it return
# to the login page
- return False
+ return False
#TODO prehodit do authenticatora
# ondemand -> authenticator.create_user
@@ -1334,15 +1347,22 @@
# Check for valid group
if self.app.authorizator.is_enforcing():
- cursor = self.app.database.get_read_cursor()
- roles = self.app.admin.get_roles_for_user(cursor, users[0])
+ if not user:
+ # This is an external user, default role is 'user'
+ roles = ['user']
+ else:
+ cursor = self.app.database.get_read_cursor()
+ roles = self.app.admin.get_roles_for_user(
+ cursor, user)
if not self.app.authorizator.contains_valid_group(roles):
log.info("No valid roles for '%s'" % username)
return False # go to login page
else:
roles = []
- login = LoginSession(self.app, users[0], roles)
+ if user is None:
+ user = username
+ login = LoginSession(self.app, user, roles)
session.client_session.attributes["login_session"] = login
return True
return False
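The sparse-database behaviour described in the r5316 log above (users not in the local database are looked up externally and default to the 'user' role) can be illustrated as follows. The data and helper are hypothetical, not cumin's actual code:

```python
# Hypothetical sketch of the role-defaulting logic from the commit log:
# local users keep their stored roles; external users absent from the
# cumin database get the default 'user' role; unknown users are rejected.
LOCAL_ROLES = {"alice": ["admin"]}       # users defined via cumin-admin

def roles_for(username, exists_externally):
    if username in LOCAL_ROLES:
        return LOCAL_ROLES[username]
    if exists_externally:
        return ["user"]                  # default role for external users
    return None                          # unknown user: reject login

print(roles_for("alice", False), roles_for("bob", True), roles_for("eve", False))
```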
r5315 - in trunk: cumin/bin cumin/etc cumin/python/cumin sage/python/sage/aviary
by tmckay@fedoraproject.org
Author: tmckay
Date: 2012-04-19 17:08:41 +0000 (Thu, 19 Apr 2012)
New Revision: 5315
Modified:
trunk/cumin/bin/cumin-web
trunk/cumin/etc/cumin.conf
trunk/cumin/python/cumin/config.py
trunk/cumin/python/cumin/main.py
trunk/sage/python/sage/aviary/aviaryoperations.py
Log:
Make Cumin defaults and options align with condor-aviary defaults and options.
Aviary interface in Cumin will be functional ootb using well-known aviary
server endpoints, locator use will be off.
BZ733516
Modified: trunk/cumin/bin/cumin-web
===================================================================
--- trunk/cumin/bin/cumin-web 2012-04-19 15:24:19 UTC (rev 5314)
+++ trunk/cumin/bin/cumin-web 2012-04-19 17:08:41 UTC (rev 5315)
@@ -18,6 +18,8 @@
sys.stdout = sys.__stdout__
def set_aviary_configs(cumin, values):
+ cumin.aviary_job_servers = values.aviary_job_servers
+ cumin.aviary_query_servers = values.aviary_query_servers
cumin.aviary_locator = values.aviary_locator
cumin.aviary_key = values.aviary_key
cumin.aviary_cert = values.aviary_cert
Modified: trunk/cumin/etc/cumin.conf
===================================================================
--- trunk/cumin/etc/cumin.conf 2012-04-19 15:24:19 UTC (rev 5314)
+++ trunk/cumin/etc/cumin.conf 2012-04-19 17:08:41 UTC (rev 5315)
@@ -24,18 +24,26 @@
# ****************************************************
# Aviary interface to condor
-# Default value for each of the following configuration
-# parameters is empty string unless otherwise specified.
-# Empty string means that no value is specified.
+# The value for this parameter is a comma separated list of URLs for Aviary
+# job servers. If the Aviary locator is used, this value will be overridden
+# but must still be non-empty to enable use of Aviary job servers.
+# Default value is shown. Uncomment and leave the value blank to disable.
+# aviary-job-servers: http://localhost:9090
-# The URL for the aviary locator service. This service allows Cumin to retrieve
-# endpoints for other aviary services. Scheme, port, and path have defaults of
-# http, 9000, and services/locator/locate respectively. If aviary-locator is
-# using ssl communication then at a minimum the value must be https://localhost.
-# Setting aviary-locator to "" (no value after the colon) will turn off use of
-# aviary in Cumin.
-#aviary-locator: localhost
+# The value for this parameter is a comma separated list of URLs for Aviary
+# query servers. If the Aviary locator is used, this value will be overridden
+# but must still be non-empty to enable use of Aviary query servers.
+# Default value is shown. Uncomment and leave the value blank to disable.
+# aviary-query-servers: http://localhost:9091
+# The locator allows Cumin to retrieve values for Aviary job servers and
+# Aviary query servers automatically. If the Aviary locator is enabled, the
+# values for aviary-job-servers and aviary-query-servers will be overridden
+# (but those parameters must still be non-empty to be enabled).
+# Default is empty string (aviary locator will not be used). Uncomment the
+# following line and edit as needed to enable.
+# aviary-locator: http://localhost:9000
+
# Full path to private key file used for ssl communication with aviary servers.
# This is necessary to communicate with any aviary server using the https scheme.
#aviary-key:
@@ -203,6 +211,30 @@
## How often in seconds to contact the Wallaby agent
## for updated information.
+## aviary-job-servers: http://localhost:9090
+## Specifies the URIs for aviary job servers. The value
+## is a comma separated list of URIs. A full URI has the
+## form 'scheme://user/password@host:port/path". The scheme
+## will default to http if not specified, the port will
+## default to 9090, and the path will default to
+## /services/job/. User and password will be empty by
+## default. As a convenience, a URI that explicitly
+## sets a port number may be followed by one or more
+## port numbers separated by commas to specify
+## multiple job servers whose URIs differ only
+## by port number. This parameter must be non-empty
+## in order for aviary job servers to be used, even
+## if aviary-locator has been set.
+
+## aviary-query-servers: http://localhost:9091
+## Like aviary-job-servers but specifies URIs for aviary
+## query servers. The port value defaults to 9091 and the
+## path defaults to /services/query/. Other
+## defaults are as noted for aviary-job-servers.
+## This parameter must be non-empty in order for aviary
+## query servers to be used, even if aviary-locator
+## has been set.
+
## log-max-mb: 10
## Maximum size in MB of *.log files created by cumin.
## A log file reaching maximum size will be rolled over.
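The port-list convenience documented in the aviary-job-servers comment above ("a URI that explicitly sets a port number may be followed by one or more port numbers separated by commas") could be expanded along these lines. This is an illustrative sketch of the documented behaviour, not cumin's actual parser:

```python
# Hypothetical sketch: expand "http://host:9090,9091" into full URIs that
# differ only by port number, per the cumin.conf comment above.
def expand_servers(value):
    servers = []
    for item in value.split(","):
        item = item.strip()
        if item.isdigit() and servers:
            # bare port: reuse the previous URI, swapping in the new port
            prefix, _, _ = servers[-1].rpartition(":")
            servers.append("%s:%s" % (prefix, item))
        else:
            servers.append(item)
    return servers

print(expand_servers("http://localhost:9090,9091,9092"))
```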
Modified: trunk/cumin/python/cumin/config.py
===================================================================
--- trunk/cumin/python/cumin/config.py 2012-04-19 15:24:19 UTC (rev 5314)
+++ trunk/cumin/python/cumin/config.py 2012-04-19 17:08:41 UTC (rev 5315)
@@ -187,8 +187,14 @@
param = ConfigParameter(self, "wallaby-refresh", int)
param.default = 60
+ param = ConfigParameter(self, "aviary-job-servers", str)
+ param.default = "http://localhost:9090"
+
+ param = ConfigParameter(self, "aviary-query-servers", str)
+ param.default = "http://localhost:9091"
+
param = ConfigParameter(self, "aviary-locator", str)
- param.default = "localhost"
+ param.default = ""
param = ConfigParameter(self, "aviary-key", str)
param.default = ""
Modified: trunk/cumin/python/cumin/main.py
===================================================================
--- trunk/cumin/python/cumin/main.py 2012-04-19 15:24:19 UTC (rev 5314)
+++ trunk/cumin/python/cumin/main.py 2012-04-19 17:08:41 UTC (rev 5315)
@@ -90,7 +90,10 @@
# mechanisms, according to the sasl documentation
self.sasl_mech_list = None
- # Aviary interface. If locator is "" the service will be disabled.
+ # Aviary interface. If server values are "",
+ # Aviary operations for that server type will not be used.
+ self.aviary_job_servers = ""
+ self.aviary_query_servers = ""
self.aviary_key = ""
self.aviary_cert = ""
self.aviary_root_cert = ""
@@ -185,7 +188,7 @@
ops = [QmfOperations("qmf", self.session)]
imports_ok = True
- if self.aviary_locator:
+ if self.aviary_job_servers or self.aviary_query_servers:
try:
from sage.aviary.aviaryoperations import \
SudsLogging, AviaryOperationsFactory
@@ -196,9 +199,13 @@
aviary_dir = os.path.join(self.home, "rpc-defs/aviary")
# The factory will choose an impl that gives us jobs, queries, or both
- # At present, selecting both is hardwired
+ # depending on whether job_servers and query_servers are empty strings.
+ # If locator is non-empty, their actual values will be overridden
+ # but the presence of a value will still control enable/disable.
aviary_itf = AviaryOperationsFactory("aviary", aviary_dir,
self.aviary_locator,
+ self.aviary_job_servers,
+ self.aviary_query_servers,
key=self.aviary_key,
cert=self.aviary_cert,
root_cert=self.aviary_root_cert,
@@ -206,14 +213,18 @@
ops.insert(0, aviary_itf)
else:
log.info("Imports failed for Aviary interface, disabling")
- else:
- log.info("No Aviary locator specified.")
+ log.info("%s Aviary locator interface" % \
+ ((self.aviary_locator and \
+ (self.aviary_job_servers or \
+ self.aviary_query_servers) and \
+ imports_ok) and "Enabled" or "Disabled"))
+
log.info("%s Aviary interface for job submission and control." % \
- ((self.aviary_locator and imports_ok) and "Enabled" or "Disabled"))
+ ((self.aviary_job_servers and imports_ok) and "Enabled" or "Disabled"))
log.info("%s Aviary interface for query operations." % \
- ((self.aviary_locator and imports_ok) and "Enabled" or "Disabled"))
+ ((self.aviary_query_servers and imports_ok) and "Enabled" or "Disabled"))
self.remote.add_mechanisms(ops)
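The revised logging above derives the job and query states independently from the configuration. A minimal sketch of those decisions (hypothetical helper, not part of the cumin source):

```python
def aviary_enabled(job_servers, query_servers, imports_ok=True):
    """Mirror the enable/disable logic logged in main.py above.

    Job and query interfaces are enabled independently; each requires a
    non-empty server string plus a successful import of the sage.aviary code.
    """
    jobs = bool(job_servers) and imports_ok
    queries = bool(query_servers) and imports_ok
    return jobs, queries
```

Note that the locator only affects where endpoints come from; the presence of the server strings is what controls enable/disable.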
Modified: trunk/sage/python/sage/aviary/aviaryoperations.py
===================================================================
--- trunk/sage/python/sage/aviary/aviaryoperations.py 2012-04-19 15:24:19 UTC (rev 5314)
+++ trunk/sage/python/sage/aviary/aviaryoperations.py 2012-04-19 17:08:41 UTC (rev 5315)
@@ -6,6 +6,7 @@
import string
import time
import sage
+import socket
from datetime import datetime
from threading import Lock
@@ -60,6 +61,29 @@
#the clients that we pool so that we can call set_options on the
#transport.
+def _get_host(name, servers):
+ '''
+ Look up a host in a dictionary produced by sage.util.host_list.
+ Return the scheme and URL for the host.
+ '''
+ scheme = ""
+ host = ""
+ if name in servers:
+ urls = servers[name]
+ if len(urls) > 0:
+ url = random.sample(urls, 1)[0]
+ scheme = url.scheme
+ host = str(url)
+ # A particular method name is going to be appended to path,
+ # so ensure the final "/" here.
+ if not host.endswith("/"):
+ host += "/"
+ return scheme, host
+
+# Nice, friendly strings for error messages on lookup
+_nice = {"JOB": "job service",
+ "QUERY_SERVER": "query service"}
+
class ServerList(object):
'''
Query an Aviary locator object for endpoints by resource and subtype.
Look up an endpoint of the specified type by machine name.
'''
def __init__(self, locator, resource, subtype):
+ # Since we have a dynamic server list, failed operations
+ # may be retried
+ self.should_retry = True
+
self._lock = Lock()
self.servers = None
self.locator = locator
self.resource = resource
self.subtype = subtype
- # Nice, friendly strings for error messages on lookup
- nice = {"JOB": "job service",
- "QUERY_SERVER": "query service"}
try:
- self.nice = nice[subtype]
+ self.nice = _nice[subtype]
except:
self.nice = subtype
- def _get_host(self, name, servers):
- '''
- Lookup a host in a dictionary produced by sage.util.host_list.
- Return the scheme and URL for the host.
- '''
- scheme = ""
- host = ""
- if name in servers:
- urls = servers[name]
- if len(urls) > 0:
- url = random.sample(urls, 1)[0]
- scheme = url.scheme
- host = str(url)
- # A particular method name is going to be appended to path,
- # so ensure the final "/" here.
- if not host.endswith("/"):
- host += "/"
- return scheme, host
-
def _find_server(self, machine, refresh=False):
'''
- Search the cached server list for machine using self._get_host.
+ Search the cached server list for machine using _get_host.
If the server list is empty or refresh is True, get a new list
of endpoints from the Aviary locator object and generate a new
server list.
@@ -140,7 +146,7 @@
# sage_URLs by machine
self.servers = host_list(urls)
- scheme, host = self._get_host(machine, self.servers)
+ scheme, host = _get_host(machine, self.servers)
finally:
self._lock.release()
return scheme, host
@@ -168,16 +174,56 @@
if host == "":
log.info("AviaryOperations: failed to locate %s on %s" \
% (self.nice, machine))
- raise Exception("Cannot locate %s on %s" % (self.nice, machine))
+ raise Exception("Cannot locate %s on %s via aviary locator" \
+ % (self.nice, machine))
return scheme, host
+class FixedServerList(object):
+ '''
+ Allows lookup of endpoints by machine when the server list is fixed.
+ '''
+ def __init__(self, servers, port, path, subtype):
+ # Fixed server list, so there is no point in retrying failed ops.
+ self.should_retry = False
+
+ # Replace any occurrence of localhost with output of gethostname()
+ # before parsing to match Machine fields of QMF objects later on.
+ host = socket.gethostname()
+ servers = string.replace(servers, "localhost", host)
+
+ self.servers = host_list(servers,
+ default_scheme="http",
+ default_port=port,
+ default_path=path)
+
+ try:
+ self.nice = _nice[subtype]
+ except:
+ self.nice = subtype
+
+ def find_server(self, machine, *args):
+ scheme, host = _get_host(machine, self.servers)
+ if host == "":
+ log.info("AviaryOperations: failed to locate %s on %s" \
+ % (self.nice, machine))
+ raise Exception("Cannot locate %s on %s, check aviary " \
+ "settings in cumin.conf" % (self.nice, machine))
+ return scheme, host
+
class _AviaryJobMethods(object):
# Do this here rather than __init__ so we don't have to worry about
# matching parameter lists in multiple inheritance cases with super
- def init(self, datadir):
+ def init(self, datadir, job_servers):
- self.job_servers = ServerList(self.locator, "SCHEDULER", "JOB")
+ if self.locator:
+ self.job_servers = ServerList(self.locator, "SCHEDULER", "JOB")
+ else:
+ self.job_servers = FixedServerList(job_servers,
+ "9090",
+ "/services/job/",
+ "JOB")
+
job_wsdl = "file:" + os.path.join(datadir, "aviary-job.wsdl")
self.job_client_pool = ClientPool(job_wsdl, None)
@@ -354,9 +400,17 @@
# Do this here rather than __init__ so we don't have to worry about
# matching parameter lists in multiple inheritance cases with super
- def init(self, datadir):
+ def init(self, datadir, query_servers):
- self.query_servers = ServerList(self.locator, "CUSTOM", "QUERY_SERVER")
+ if self.locator:
+ self.query_servers = ServerList(self.locator,
+ "CUSTOM", "QUERY_SERVER")
+ else:
+ self.query_servers = FixedServerList(query_servers,
+ "9091",
+ "/services/query/",
+ "QUERY_SERVER")
+
query_wsdl = "file:" + os.path.join(datadir, "aviary-query.wsdl")
self.query_client_pool = ClientPool(query_wsdl, None)
@@ -662,10 +716,13 @@
# (probably due to a restart on the condor side)
# Let's get new endpoints, reset the client,
# and try again.
- log.debug("AviaryOperations: received %s, retrying operation %s"\
- % (str(e), client.options.location))
- self._set_client_info(client, refresh=True)
- result = meth(*meth_args, **meth_kwargs)
+ if client.server_list.should_retry:
+ log.debug("AviaryOperations: received %s, retrying %s"\
+ % (str(e), client.options.location))
+ self._set_client_info(client, refresh=True)
+ result = meth(*meth_args, **meth_kwargs)
+ else:
+ raise e
return result
def _call_sync(self, process_results, meth, *meth_args, **meth_kwargs):
@@ -685,50 +742,58 @@
return sync
class AviaryOperations(_AviaryCommon, _AviaryJobMethods, _AviaryQueryMethods):
- def __init__(self, name, datadir, locator,
+ def __init__(self, name, datadir, locator, job_servers, query_servers,
key="", cert="", root_cert="", domain_verify=True):
super(AviaryOperations, self).__init__(name, locator,
key, cert, root_cert,
domain_verify)
- _AviaryJobMethods.init(self, datadir)
- _AviaryQueryMethods.init(self, datadir)
+ _AviaryJobMethods.init(self, datadir, job_servers)
+ _AviaryQueryMethods.init(self, datadir, query_servers)
class AviaryJobOperations(_AviaryCommon, _AviaryJobMethods):
- def __init__(self, name, datadir, locator,
+ def __init__(self, name, datadir, locator, job_servers,
key="", cert="", root_cert="", domain_verify=True):
super(AviaryJobOperations, self).__init__(name, locator,
key, cert, root_cert,
domain_verify)
- _AviaryJobMethods.init(self, datadir)
+ _AviaryJobMethods.init(self, datadir, job_servers)
class AviaryQueryOperations(_AviaryCommon, _AviaryQueryMethods):
- def __init__(self, name, datadir, locator,
+ def __init__(self, name, datadir, locator, query_servers,
key="", cert="", root_cert="", domain_verify=True):
super(AviaryQueryOperations, self).__init__(name, locator,
key, cert, root_cert,
domain_verify)
- _AviaryQueryMethods.init(self, datadir)
+ _AviaryQueryMethods.init(self, datadir, query_servers)
def AviaryOperationsFactory(name, datadir, locator_uri,
- job_server=True, query_server=True,
+ job_servers, query_servers,
key="", cert="", root_cert="", domain_verify=True):
- locator = AviaryLocator(datadir, locator_uri,
- key, cert, root_cert, domain_verify)
- if job_server and query_server:
+
+ # If the locator URI has not been specified, the locator is disabled and
+ # we will use the specified job_servers and query_servers values.
+ if locator_uri:
+ locator = AviaryLocator(datadir, locator_uri,
+ key, cert, root_cert, domain_verify)
+ else:
+ locator = None
+
+ if job_servers and query_servers:
res = AviaryOperations(name, datadir, locator,
+ job_servers, query_servers,
key, cert, root_cert, domain_verify)
- elif job_server:
- res = AviaryJobOperations(name, datadir, locator,
+ elif job_servers:
+ res = AviaryJobOperations(name, datadir, locator, job_servers,
key, cert, root_cert, domain_verify)
- elif query_server:
- res = AviaryQueryOperations(name, datadir, locator,
+ elif query_servers:
+ res = AviaryQueryOperations(name, datadir, locator, query_servers,
key, cert, root_cert, domain_verify)
return res
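The factory's dispatch on non-empty server strings can be summarized as a toy model (the real function returns AviaryOperations, AviaryJobOperations, or AviaryQueryOperations instances):

```python
def choose_ops(job_servers, query_servers):
    """Toy model of AviaryOperationsFactory's selection logic."""
    if job_servers and query_servers:
        return "both"      # AviaryOperations
    elif job_servers:
        return "jobs"      # AviaryJobOperations
    elif query_servers:
        return "queries"   # AviaryQueryOperations
    return None            # aviary disabled entirely
```

Note that in the factory above, res is never assigned when both strings are empty, so `return res` would raise NameError; main.py guards against calling the factory in that case.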
r5314 - branches/play_plumage/cumin/resources
by croberts@fedoraproject.org
Author: croberts
Date: 2012-04-19 15:24:19 +0000 (Thu, 19 Apr 2012)
New Revision: 5314
Modified:
branches/play_plumage/cumin/resources/app.js
Log:
The dataContainer object for each chart now contains the duration of the chart.
Modified: branches/play_plumage/cumin/resources/app.js
===================================================================
--- branches/play_plumage/cumin/resources/app.js 2012-04-19 15:02:25 UTC (rev 5313)
+++ branches/play_plumage/cumin/resources/app.js 2012-04-19 15:24:19 UTC (rev 5314)
@@ -504,6 +504,7 @@
dataContainer['end_secs'] = json.end_secs;
dataContainer['tnow'] = json.tnow;
dataContainer['x_axis_values'] = [];
+ dataContainer['duration'] = json['duration'];
switch(json['duration']) {
case 600:
r5313 - branches/play_plumage/mint/python/mint/plumage
by croberts@fedoraproject.org
Author: croberts
Date: 2012-04-19 15:02:25 +0000 (Thu, 19 Apr 2012)
New Revision: 5313
Modified:
branches/play_plumage/mint/python/mint/plumage/session.py
Log:
Adding log messages for clarity.
Modified: branches/play_plumage/mint/python/mint/plumage/session.py
===================================================================
--- branches/play_plumage/mint/python/mint/plumage/session.py 2012-04-18 20:00:15 UTC (rev 5312)
+++ branches/play_plumage/mint/python/mint/plumage/session.py 2012-04-19 15:02:25 UTC (rev 5313)
@@ -143,6 +143,8 @@
else:
most_recent = datetime.now() - timedelta(seconds=300) + UTC_DIFF
+ log.info("PlumageSessionThread--current: Loading records newer than %s" % most_recent)
+
itemSet = self.collection.find({"ts": {'$gt': most_recent}}).sort("ts", -1)
for item in itemSet:
record = OSUtil()
@@ -165,8 +167,8 @@
class CatchUpPlumageSessionThread(PlumageSessionThread):
def run(self):
(oldest, newest) = self.app.update_thread.get_first_and_last_sample_timestamp(self.cls)
-
if newest is not None:
+ log.info("CatchUpPlumageSessionThread: Starting for records newer than %s" % newest)
itemSet = self.collection.find({"ts": {'$gt': newest + UTC_DIFF, '$lt': datetime.now() - timedelta(seconds=300) + UTC_DIFF}}).sort("ts", -1)
for item in itemSet:
record = OSUtil()
r5312 - branches/play_plumage/mint/python/mint/plumage
by croberts@fedoraproject.org
Author: croberts
Date: 2012-04-18 20:00:15 +0000 (Wed, 18 Apr 2012)
New Revision: 5312
Modified:
branches/play_plumage/mint/python/mint/plumage/session.py
branches/play_plumage/mint/python/mint/plumage/update.py
Log:
Improving cumin-report logic to eliminate duplicate records.
Modified: branches/play_plumage/mint/python/mint/plumage/session.py
===================================================================
--- branches/play_plumage/mint/python/mint/plumage/session.py 2012-04-18 15:19:09 UTC (rev 5311)
+++ branches/play_plumage/mint/python/mint/plumage/session.py 2012-04-18 20:00:15 UTC (rev 5312)
@@ -40,6 +40,11 @@
black_list.add(cls)
else:
# We're going to have a thread for this...
+ self.threads.append(CatchUpPlumageSessionThread(
+ self.app,
+ self.server_host,
+ self.server_port,
+ cls))
self.threads.append(PlumageSessionThread(
self.app,
self.server_host,
@@ -61,6 +66,11 @@
if pkg._name not in self.app.package_filter:
self.app.packages.add(pkg)
for cls in pkg._classes:
+ self.threads.append(CatchUpPlumageSessionThread(
+ self.app,
+ self.server_host,
+ self.server_port,
+ cls))
self.threads.append(PlumageSessionThread(
self.app,
self.server_host,
@@ -98,9 +108,14 @@
# We create objects here. Tag them with the right class,
# probably specified to us from a config option (with a corresponding
# query specification in the xml)
-
- itemSet = self.collection.find().sort("ts", -1)
-
+ (oldest, newest) = self.app.update_thread.get_first_and_last_sample_timestamp(self.cls)
+ if oldest is None:
+ # if we have no oldest record (first run), start at "5 min ago" and start loading everything
+ oldest = datetime.now() - timedelta(seconds=300)
+ oldest = oldest + UTC_DIFF
+
+ log.info("PlumageSessionThread--history: Loading records older than %s" % oldest)
+ itemSet = self.collection.find({"ts": {'$lt': oldest}}).sort("ts", -1)
for item in itemSet:
record = OSUtil()
for name, value in item.iteritems():
@@ -114,15 +129,21 @@
self.app.update_thread.enqueue(obj)
log.info("PlumageSessionThread--history: run completed")
-
+
+
class CurrentPlumageSessionThread(PlumageSessionThread):
def run(self):
while True:
if self.stop_requested:
break
- sleep(300)
- itemSet = self.collection.find({"ts": {'$gt': datetime.now() - timedelta(seconds=300) + UTC_DIFF}}).sort("ts", -1)
+ (oldest, newest) = self.app.update_thread.get_first_and_last_sample_timestamp(self.cls)
+ if newest is not None:
+ most_recent = max((datetime.now() - timedelta(seconds=300) + UTC_DIFF), (newest + UTC_DIFF))
+ else:
+ most_recent = datetime.now() - timedelta(seconds=300) + UTC_DIFF
+
+ itemSet = self.collection.find({"ts": {'$gt': most_recent}}).sort("ts", -1)
for item in itemSet:
record = OSUtil()
for name, value in item.iteritems():
@@ -134,3 +155,30 @@
obj = ObjectUpdate(self.app.model, record, self.cls)
self.app.update_thread.enqueue(obj)
log.info("PlumageSessionThread--current: pass completed %s records added" % itemSet.count())
+ sleep(300)
+
+
+'''This thread is meant to run once at startup. It will find the newest record in the cumin
+ database and then make a pass to load all records from that time forward (up to 5 min ago; those
+ get picked up by CurrentPlumageSessionThread).
+'''
+class CatchUpPlumageSessionThread(PlumageSessionThread):
+ def run(self):
+ (oldest, newest) = self.app.update_thread.get_first_and_last_sample_timestamp(self.cls)
+
+ if newest is not None:
+ itemSet = self.collection.find({"ts": {'$gt': newest + UTC_DIFF, '$lt': datetime.now() - timedelta(seconds=300) + UTC_DIFF}}).sort("ts", -1)
+ for item in itemSet:
+ record = OSUtil()
+ for name, value in item.iteritems():
+ if isinstance(value, datetime):
+ #required since MongoDB uses UTC for everything, here we get our sanity back
+ setattr(record, name, value - UTC_DIFF)
+ else:
+ setattr(record, name, value)
+
+ obj = ObjectUpdate(self.app.model, record, self.cls)
+ self.app.update_thread.enqueue(obj)
+ log.info("CatchUpPlumageSessionThread--catch-up: catch-up run completed for %s records newer than %s and older than %s" % (itemSet.count(), newest, datetime.now() - timedelta(seconds=300) + UTC_DIFF))
+ else:
+ log.info("CatchUpPlumageSessionThread: Skipping catch-up, no records present (probably first-run)")
Modified: branches/play_plumage/mint/python/mint/plumage/update.py
===================================================================
--- branches/play_plumage/mint/python/mint/plumage/update.py 2012-04-18 15:19:09 UTC (rev 5311)
+++ branches/play_plumage/mint/python/mint/plumage/update.py 2012-04-18 20:00:15 UTC (rev 5312)
@@ -70,6 +70,21 @@
self.cursor = self.conn.cursor(cursor_factory=UpdateCursor)
self.cursor.stats = self.stats
+ def get_first_and_last_sample_timestamp(self, cls):
+ cursor = self.conn.cursor()
+ table = cls.sql_samples_table
+ oldest = None
+ newest = None
+ cursor.execute("select %s from %s order by %s asc limit 1" % (cls.timestamp, table.identifier, cls.timestamp))
+ if cursor.rowcount > 0:
+ oldest = cursor.fetchone()[0]
+
+ cursor.execute("select %s from %s order by %s desc limit 1" % (cls.timestamp, table.identifier, cls.timestamp))
+ if cursor.rowcount > 0:
+ newest = cursor.fetchone()[0]
+
+ return (oldest, newest)
+
def enqueue(self, update):
self.updates.put(update)
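The catch-up window used by CatchUpPlumageSessionThread above can be sketched as a pure function. Illustrative only; the names and the 300-second lag mirror the code in session.py.

```python
from datetime import datetime, timedelta

def catchup_window(newest, utc_diff, now=None, lag_seconds=300):
    """Compute the (lower, upper) UTC bounds for the catch-up query.

    newest   -- newest local timestamp already loaded into the cumin database
    utc_diff -- offset converting local timestamps to MongoDB's UTC storage
    Records strictly between the bounds are loaded once at startup; anything
    newer than the upper bound is left for the periodic current-data pass.
    """
    if now is None:
        now = datetime.now()
    lower = newest + utc_diff
    upper = now - timedelta(seconds=lag_seconds) + utc_diff
    return lower, upper

# The bounds feed the MongoDB query used above:
#   collection.find({"ts": {"$gt": lower, "$lt": upper}}).sort("ts", -1)
```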
r5311 - in trunk: cumin/bin cumin/etc cumin/python/cumin sage/python/sage/aviary
by tmckay@fedoraproject.org
Author: tmckay
Date: 2012-04-18 15:19:09 +0000 (Wed, 18 Apr 2012)
New Revision: 5311
Modified:
trunk/cumin/bin/cumin-web
trunk/cumin/etc/cumin.conf
trunk/cumin/python/cumin/config.py
trunk/cumin/python/cumin/main.py
trunk/sage/python/sage/aviary/aviarylocator.py
trunk/sage/python/sage/aviary/aviaryoperations.py
trunk/sage/python/sage/aviary/clients.py
Log:
Remove old configs for aviary-job-servers and aviary-query-servers
Default aviary-locator to localhost in cumin.conf
Improving logging and notices on exceptions.
Extend AviaryLocator to support https connections to locator
BZ733516
Modified: trunk/cumin/bin/cumin-web
===================================================================
--- trunk/cumin/bin/cumin-web 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/cumin/bin/cumin-web 2012-04-18 15:19:09 UTC (rev 5311)
@@ -18,8 +18,6 @@
sys.stdout = sys.__stdout__
def set_aviary_configs(cumin, values):
-# cumin.aviary_job_servers = values.aviary_job_servers+
-# cumin.aviary_query_servers = values.aviary_query_servers
cumin.aviary_locator = values.aviary_locator
cumin.aviary_key = values.aviary_key
cumin.aviary_cert = values.aviary_cert
Modified: trunk/cumin/etc/cumin.conf
===================================================================
--- trunk/cumin/etc/cumin.conf 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/cumin/etc/cumin.conf 2012-04-18 15:19:09 UTC (rev 5311)
@@ -28,16 +28,14 @@
# parameters is empty string unless otherwise specified.
# Empty string means that no value is specified.
-# To use aviary job servers, uncomment the following line and edit as needed.
-# The value is a comma separated list of URLs. The default port condor uses
-# for an Aviary job server is 9090.
-# aviary-job-servers: http://localhost:9090
+# The URL for the aviary locator service. This service allows Cumin to retrieve
+# endpoints for other aviary services. Scheme, port, and path have defaults of
+# http, 9000, and services/locator/locate respectively. If aviary-locator is
+# using ssl communication then at a minimum the value must be https://localhost.
+# Setting aviary-locator to "" (no value after the colon) will turn off use of
+# aviary in Cumin.
+#aviary-locator: localhost
-# To use aviary query servers, uncomment the following line and edit as needed.
-# The value is a comma separated list of URLs. The default port condor uses
-# for an Aviary query server is 9091.
-# aviary-query-servers: http://localhost:9091
-
# Full path to private key file used for ssl communication with aviary servers.
# This is necessary to communicate with any aviary server using the https scheme.
#aviary-key:
@@ -205,31 +203,6 @@
## How often in seconds to contact the Wallaby agent
## for updated information.
-## use-aviary: True
-## Whether or not to use the Aviary services for
-## remote procedure calls to condor. If this is
-## set to False, the QMF interface will be used
-## instead.
-
-## aviary-job-servers: http://localhost:9090
-## Specifies the URIs for aviary job servers. The value
-## is a comma separated list of URIs. A full URI has the
-## form 'scheme://user/password@host:port/path'. The scheme
-## will default to http if not specified, the port will
-## default to 9090, and the path will default to
-## /services/job/. User and password will be empty by
-## default. As a convenience, a URI that explicitly
-## sets a port number may be followed by one or more
-## port numbers separated by commas to specify
-## multiple job servers whose URIs differ only
-## by port number.
-
-## aviary-query-servers: http://localhost:9091
-## Like aviary-job-servers but specifies URIs for aviary
-## query servers. The port value defaults to 9091 and the
-## path defaults to /services/query/. Other
-## defaults are as noted for aviary-job-servers.
-
## log-max-mb: 10
## Maximum size in MB of *.log files created by cumin.
## A log file reaching maximum size will be rolled over.
Modified: trunk/cumin/python/cumin/config.py
===================================================================
--- trunk/cumin/python/cumin/config.py 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/cumin/python/cumin/config.py 2012-04-18 15:19:09 UTC (rev 5311)
@@ -187,12 +187,6 @@
param = ConfigParameter(self, "wallaby-refresh", int)
param.default = 60
- param = ConfigParameter(self, "aviary-job-servers", str)
- param.default = ""
-
- param = ConfigParameter(self, "aviary-query-servers", str)
- param.default = ""
-
param = ConfigParameter(self, "aviary-locator", str)
param.default = "localhost"
Modified: trunk/cumin/python/cumin/main.py
===================================================================
--- trunk/cumin/python/cumin/main.py 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/cumin/python/cumin/main.py 2012-04-18 15:19:09 UTC (rev 5311)
@@ -90,10 +90,7 @@
# mechanisms, according to the sasl documentation
self.sasl_mech_list = None
- # Aviary interface. If server values are "",
- # Aviary operations for that server type will not be used.
- #self.aviary_job_servers = ""
- #self.aviary_query_servers = ""
+ # Aviary interface. If locator is "" the service will be disabled.
self.aviary_key = ""
self.aviary_cert = ""
self.aviary_root_cert = ""
@@ -179,7 +176,7 @@
self.mainpage_cb = self.authorizator.find_mainpage
# Create RPC interfaces for QMF and aviary.
- # These service have overlapping functionality,
+ # These services have overlapping functionality,
# so they are wrapped in a sage.Catalog object
# which allows both to supply operations. First
# service in the list takes precedence for any
@@ -199,7 +196,7 @@
aviary_dir = os.path.join(self.home, "rpc-defs/aviary")
# The factory will choose an impl that gives us jobs, queries, or both
- # For the moment, we have True/True selecting both hardwired
+ # At present, selecting both is hardwired
aviary_itf = AviaryOperationsFactory("aviary", aviary_dir,
self.aviary_locator,
key=self.aviary_key,
@@ -209,8 +206,9 @@
ops.insert(0, aviary_itf)
else:
log.info("Imports failed for Aviary interface, disabling")
+ else:
+ log.info("No Aviary locator specified.")
-
log.info("%s Aviary interface for job submission and control." % \
((self.aviary_locator and imports_ok) and "Enabled" or "Disabled"))
Modified: trunk/sage/python/sage/aviary/aviarylocator.py
===================================================================
--- trunk/sage/python/sage/aviary/aviarylocator.py 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/sage/python/sage/aviary/aviarylocator.py 2012-04-18 15:19:09 UTC (rev 5311)
@@ -2,24 +2,28 @@
import os
from sage.util import parse_URL
-from clients import OverrideClient
+from clients import OverrideClient, TransportFactory
log = logging.getLogger("sage.aviary.locator")
class AviaryLocator(object):
- def __init__(self, datadir, locator_uri):
+ def __init__(self, datadir, locator_uri,
+ key="", cert="", root_cert="", domain_verify=True):
'''
- Constructor.
+ Initialize AviaryLocator so that get_endpoints may be used.
datadir -- the directory containing a wsdl file for the locator
locator_uri -- the URI used to the Aviary locator.
- There are sane defaults for the URI so all that must be supplied
- is the hostname, although scheme, port, and path may also be supplied.
+ There are sane defaults for scheme, port and path so only
+ the hostname is required. Defaults are http, 9000, and
+ 'services/locator/locate' respectively.
'''
- self.locator_uri = self._get_uri(locator_uri)
+ self.transport = TransportFactory(key, cert, root_cert, domain_verify)
+ self.scheme, self.locator_uri = self._get_uri(locator_uri)
self.datadir = datadir
self.wsdl = "file:" + os.path.join(self.datadir, "aviary-locator.wsdl")
+ log.info("AviaryLocator: locator URL set to %s" % self.locator_uri)
def _get_uri(self, locator):
uri = parse_URL(locator)
@@ -30,7 +34,7 @@
uri.port = "9000"
if uri.path is None:
uri.path = "services/locator/locate"
- return str(uri)
+ return uri.scheme, str(uri)
def get_endpoints(self, resource, sub_type):
'''
@@ -38,8 +42,10 @@
See documentation on the Aviary locator for information on
legal values for resource and sub_type.
'''
+ the_transport = self.transport.get_transport(self.scheme)
client = OverrideClient(self.wsdl, cache=None)
- client.set_options(location=self.locator_uri)
+ client.set_options(location=self.locator_uri,
+ transport=the_transport)
res_id = client.factory.create("ns0:ResourceID")
res_id.resource = resource
res_id.sub_type = sub_type
Modified: trunk/sage/python/sage/aviary/aviaryoperations.py
===================================================================
--- trunk/sage/python/sage/aviary/aviaryoperations.py 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/sage/python/sage/aviary/aviaryoperations.py 2012-04-18 15:19:09 UTC (rev 5311)
@@ -3,7 +3,6 @@
import logging
import random
import urllib2
-import socket
import string
import time
import sage
@@ -11,17 +10,9 @@
from datetime import datetime
from threading import Lock
from suds import *
-from suds.transport.https import HttpAuthenticated
from sage.util import CallSync, CallThread, ObjectPool, host_list
-from sage.https import *
from aviarylocator import AviaryLocator
-from clients import ClientPool
-try:
- from sage.https_full import HTTPSFullCertTransport
- has_full_cert = True
- technology = sage.https_full.technology
-except:
- has_full_cert = False
+from clients import ClientPool, TransportFactory
log = logging.getLogger("sage.aviary")
@@ -70,6 +61,11 @@
#transport.
class ServerList(object):
+ '''
+ Query an Aviary locator object for endpoints by resource and subtype.
+ Look up an endpoint of the specified type by machine name.
+ Allow the cached list to be refreshed on demand.
+ '''
def __init__(self, locator, resource, subtype):
self._lock = Lock()
@@ -87,6 +83,10 @@
self.nice = subtype
def _get_host(self, name, servers):
+ '''
+ Look up a host in a dictionary produced by sage.util.host_list.
+ Return the scheme and URL for the host.
+ '''
scheme = ""
host = ""
if name in servers:
@@ -102,17 +102,27 @@
return scheme, host
def _find_server(self, machine, refresh=False):
+ '''
+ Search the cached server list for machine using self._get_host.
+ If the server list is empty or refresh is True, get a new list
+ of endpoints from the Aviary locator object and generate a new
+ server list.
+ '''
# If we already have a list of values then return that list unless
# refresh is True
self._lock.acquire()
try:
if self.servers is None or refresh:
+ log.debug("AviaryOperations: refresh server list for %s %s" \
+ % (self.resource, self.subtype))
try:
result = self.locator.get_endpoints(self.resource,
self.subtype)
except Exception, e:
result = _AviaryCommon._pretty_result(e,
self.locator.locator_uri)
+ log.debug("AviaryOperations: failed to get endpoints, " \
+ "exception message '%s'" % result.message)
raise result
urls = []
@@ -122,7 +132,7 @@
urls.extend(r.location)
urls = ",".join(urls)
if urls == "":
- log.info("AviaryOperations: locator returned "\
+ log.info("AviaryOperations: locator returned " \
"no endpoints for %s %s" % (self.resource,
self.subtype))
@@ -136,6 +146,12 @@
return scheme, host
def find_server(self, machine, refresh=False):
+ '''
+ Look up a URL in the cached server list by machine.
+ Generate a new server list if refresh is True, or if the
+ initial host lookup fails and refresh == "on_no_host".
+ Return the scheme and URL for the specified machine.
+ '''
# Update the list if necessary and return host info for machine.
# If refresh is True, the server list will be updated. This is
# used when a previously cached host fails during a remote method
@@ -150,6 +166,8 @@
# it's there now. Think "on demand polling".
scheme, host = self._find_server(machine, True)
if host == "":
+ log.info("AviaryOperations: failed to locate %s on %s" \
+ % (self.nice, machine))
raise Exception("Cannot locate %s on %s" % (self.nice, machine))
return scheme, host
@@ -163,7 +181,6 @@
job_wsdl = "file:" + os.path.join(datadir, "aviary-job.wsdl")
self.job_client_pool = ClientPool(job_wsdl, None)
-
def set_job_attribute(self, scheduler, job_id, name, value, callback, submission):
assert callback
@@ -549,15 +566,17 @@
t = CallThread(self.call_client_retry, my_callback,
query_client, "getSubmissionSummary", subId)
-
- #t = CallThread(query_client.service.getSubmissionSummary,
- # my_callback, subId)
t.start()
class _AviaryCommon(object):
def __init__(self, name, locator,
key="", cert="", root_cert="", domain_verify=True):
+
+ self.transports = TransportFactory(key, cert, root_cert, domain_verify)
+
+ # Log init messages from TransportFactory
+ self.transports.log_details(log, "AviaryOperations")
self.name = name
# Put this here to be referenced by AviaryOperations types
@@ -566,27 +585,6 @@
self.type_to_aviary = self._type_to_aviary()
self.aviary_to_type = self._aviary_to_type()
- self.key = key
- self.cert = cert
- self.root_cert = root_cert
- self.domain_verify = domain_verify
- self.server_validation_possible = has_full_cert
- if self.root_cert == "":
- log.info("AviaryOperations: no root certificate file specified, "\
- "using client validation only for ssl connections.")
-
- elif not self.server_validation_possible:
- log.info("AviaryOperations: server certificate validation not "\
- "supported, using client validation "\
- "only for ssl connections.")
- else:
- log.info("AviaryOperations: using client and server "\
- "certificate validation for ssl connections, "\
- "solution is %s" % technology)
-
- log.info("AviaryOperations: verify server domain against "\
- "certificate during validation (%s)" % self.domain_verify)
-
@classmethod
def _type_to_aviary(cls):
# Need to be able to turn simple Python types into Aviary types for attributes
@@ -606,26 +604,7 @@
# Since we pool the clients and reuse them for different requests
# and since its possible to be using servers with different schemes,
# we have to always reset the transport here.
- if scheme == "https":
- if not os.path.isfile(self.key):
- raise Exception("Private key file "\
- "for ssl communication with Aviary not found")
- if not os.path.isfile(self.cert):
- raise Exception("Client certificate file "\
- "for ssl communication with Aviary not found")
- if self.root_cert != "" and self.server_validation_possible:
- if not os.path.isfile(self.root_cert):
- raise Exception("Root certificate file "\
- "for Aviary server validation not found")
- the_transport = HTTPSFullCertTransport(self.key,
- self.cert,
- self.root_cert,
- self.domain_verify)
- else:
- the_transport = HTTPSClientCertTransport(self.key, self.cert)
- else:
- # this is the default transport when none is specified
- the_transport = HttpAuthenticated()
+ the_transport = self.transports.get_transport(scheme)
client.set_options(transport=the_transport)
def _setup_client(self, client, server_list, name, meth_name):
@@ -664,6 +643,12 @@
def _pretty_result(cls, result, host):
if isinstance(result, urllib2.URLError):
return Exception("Trouble reaching host %s, %s" % (host, result.reason))
+ elif isinstance(result, Exception):
+ if hasattr(result, "args"):
+ reason = result.args
+ else:
+ reason = str(result)
+ return Exception("Operation failed on host %s, %s" % (host, reason))
return result
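The new `_pretty_result` branch above folds any exception into a single host-tagged error. A minimal standalone sketch of that normalization (function name and the duck-typed `reason` check are illustrative, not the project's API):

```python
# Hedged sketch: normalize raw client errors into one Exception carrying
# host context, mirroring the _pretty_result logic in the diff above.
def pretty_result(result, host):
    # URLError-like objects carry a .reason attribute.
    if hasattr(result, "reason"):
        return Exception("Trouble reaching host %s, %s" % (host, result.reason))
    # Any other exception: prefer its args tuple, fall back to str().
    if isinstance(result, Exception):
        reason = result.args if getattr(result, "args", None) else str(result)
        return Exception("Operation failed on host %s, %s" % (host, reason))
    # Non-error results pass through unchanged.
    return result
```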
def call_client_retry(self, client, meth_name, *meth_args, **meth_kwargs):
@@ -673,13 +658,12 @@
try:
result = meth(*meth_args, **meth_kwargs)
except Exception, e:
- result = e
-
- # If we get a URL error, our endpoint may have moved
- # (probably due to a restart on the condor side)
- # Let's try to get new endpoints, reset the client,
- # and try again.
- if isinstance(result, urllib2.URLError):
+ # If we get an exception, our endpoint may have moved
+ # (probably due to a restart on the condor side)
+ # Let's get new endpoints, reset the client,
+ # and try again.
+            log.debug("AviaryOperations: received %s, retrying operation at %s"\
+                      % (str(e), client.options.location))
self._set_client_info(client, refresh=True)
result = meth(*meth_args, **meth_kwargs)
return result
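The rewritten `call_client_retry` above now retries on any exception, not just `urllib2.URLError`, on the theory that the endpoint moved after a condor restart. A self-contained sketch of that retry-once pattern (names here are illustrative, not the project's API):

```python
# Hedged sketch: on any failure, refresh endpoint state and retry exactly
# once, as call_client_retry does after _set_client_info(refresh=True).
def call_with_retry(meth, refresh, *args, **kwargs):
    try:
        return meth(*args, **kwargs)
    except Exception:
        # Endpoint may have moved; refresh and make a single second attempt.
        refresh()
        return meth(*args, **kwargs)
```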
@@ -735,7 +719,8 @@
def AviaryOperationsFactory(name, datadir, locator_uri,
job_server=True, query_server=True,
key="", cert="", root_cert="", domain_verify=True):
- locator = AviaryLocator(datadir, locator_uri)
+ locator = AviaryLocator(datadir, locator_uri,
+ key, cert, root_cert, domain_verify)
if job_server and query_server:
res = AviaryOperations(name, datadir, locator,
key, cert, root_cert, domain_verify)
Modified: trunk/sage/python/sage/aviary/clients.py
===================================================================
--- trunk/sage/python/sage/aviary/clients.py 2012-04-17 21:22:17 UTC (rev 5310)
+++ trunk/sage/python/sage/aviary/clients.py 2012-04-18 15:19:09 UTC (rev 5311)
@@ -1,7 +1,66 @@
+import os
from suds.client import Client
from sage.util import ObjectPool
+from suds.transport.https import HttpAuthenticated
+from sage.https import HTTPSClientCertTransport
try:
+    from sage.https_full import HTTPSFullCertTransport
+    from sage.https_full import technology
+    has_full_cert = True
+except ImportError:
+    has_full_cert = False
+
+class TransportFactory(object):
+ def __init__(self, key="", cert="", root_cert="", domain_verify=True):
+ self.key = key
+ self.cert = cert
+ self.root_cert = root_cert
+ self.domain_verify = domain_verify
+ self.server_validation_possible = has_full_cert
+
+ def log_details(self, log, where="HTTPSClient"):
+ if self.root_cert == "":
+ log.info("%s: no root certificate file specified, "\
+ "using client validation only for ssl connections." % where)
+
+ elif not self.server_validation_possible:
+ log.info("%s: server certificate validation not "\
+ "supported, using client validation "\
+ "only for ssl connections." % where)
+ else:
+            log.info("%s: using client and server "\
+                     "certificate validation for ssl connections, "\
+                     "solution is %s" % (where, technology))
+
+ log.info("%s: verify server domain against "\
+ "certificate during validation (%s)" \
+ % (where, self.domain_verify))
+
+ def get_transport(self, scheme):
+ if scheme == "https":
+ if not os.path.isfile(self.key):
+ raise Exception("Private key file "\
+ "for ssl communication with Aviary not found")
+ if not os.path.isfile(self.cert):
+ raise Exception("Client certificate file "\
+ "for ssl communication with Aviary not found")
+ if self.root_cert != "" and self.server_validation_possible:
+ if not os.path.isfile(self.root_cert):
+ raise Exception("Root certificate file "\
+ "for Aviary server validation not found")
+ the_transport = HTTPSFullCertTransport(self.key,
+ self.cert,
+ self.root_cert,
+ self.domain_verify)
+ else:
+ the_transport = HTTPSClientCertTransport(self.key, self.cert)
+ else:
+ # this is the default transport when none is specified
+ the_transport = HttpAuthenticated()
+ return the_transport
+
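`TransportFactory.get_transport` centralizes the scheme-based transport choice that was previously inlined in `_AviaryCommon`. A dependency-free sketch of that selection logic, with the suds transport classes replaced by plain strings for illustration (all names below are assumptions, not the project's API):

```python
import os

# Hedged sketch of TransportFactory.get_transport's decision tree:
# plain http gets the default transport; https requires key and cert
# files, and adds server validation only when a root cert is usable.
def choose_transport(scheme, key, cert, root_cert="", full_cert_ok=False):
    if scheme != "https":
        return "HttpAuthenticated"            # default, non-ssl transport
    for path, what in ((key, "Private key"), (cert, "Client certificate")):
        if not os.path.isfile(path):
            raise Exception("%s file for ssl communication not found" % what)
    if root_cert and full_cert_ok:
        if not os.path.isfile(root_cert):
            raise Exception("Root certificate file not found")
        return "HTTPSFullCertTransport"       # client + server validation
    return "HTTPSClientCertTransport"         # client validation only
```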
+try:
# Some of this stuff does not exist pre suds 0.4.1
# Make it work anyway for testing on such hosts by
# declaring simple OverridesPlugin in the exception case.