JASIG CAS for osTicket

Update 2015-06-03: I have moved this plugin to its own project. The post below has been updated to reflect this.

Back when I was working at RPI I had set up a ticketing system to handle the volume of support-related requests coming in via e-mail. I turned to osTicket, but its authentication system has always been a bit.. err.. not user friendly. Given that many college campuses, including my own, use CAS, I figured it was time to get that hacked into osTicket. Thankfully osTicket has built a plugin system that is fairly easy to use, albeit undocumented.

I wrote a PR to get this support merged upstream, but given the radio silence since proposing it, I have decided to document how to include it on your own instance.


The plugin supports:

  • CAS extended attributes for user names and e-mail addresses.
  • Optionally appending a suffix to user names to allow mapping to e-mail addresses.
  • Login for both agents and clients (can be toggled for neither, either, or both).
  • Certificate validation (can be disabled for testing).
  • Auto creates clients if not already in osTicket.

How to install

  1. Download the source or compiled PHAR package.
  2. If you downloaded the PHAR package skip to step #6.
  3. Expand the downloaded compressed container.
  4. Clone core-plugins into another directory.
  5. In the expanded folder run php -dphar.readonly=0 ../core-plugins/make.php build auth-cas
  6. Move the auth-cas.phar file to your <osticket root>/include/plugins/
  7. Log in to the SCP of osTicket and navigate to Admin Panel > Manage > Plugins
  8. Select Add New Plugin
  9. Install the JASIG CAS Authentication plugin
  10. Click on the plugin title to configure the plugin
  11. Once configured go back to the plugins menu and enable the plugin
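For reference, steps 3 through 6 can be captured as a small script. The core-plugins repo URL, the source directory name, and the osTicket path are all assumptions here; adjust them for your setup before running:

```shell
# Sketch of the build-and-install steps; paths and URL are examples only
cat > build-auth-cas.sh <<'EOF'
#!/bin/sh
set -e
git clone https://github.com/osTicket/osTicket-plugins core-plugins
cd auth-cas                # the expanded plugin source directory
php -dphar.readonly=0 ../core-plugins/make.php build auth-cas
mv auth-cas.phar /var/www/osticket/include/plugins/
EOF
chmod +x build-auth-cas.sh
```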


  • If you are in production, please do not leave phar.readonly = Off in your php.ini file. Heck, don’t even build the package on your production instance.
  • If you get PHP errors after enabling the plugin you can manually delete the auth-cas.phar file from your plugin directory.

Don't Commit Your Passwords

There’s a fairly popular post on the front page of Hacker News today about a developer who mistakenly committed a configuration settings file that includes the path for a repository artifacts service. He also happened to include his username and password. In plaintext. Publicly accessible. Indexed by Google.

It should be obvious why this is a big no-no, but it’s actually fairly common. To be clear, config files should never be committed into a repository. You may commit sample config files to define the structure of the configuration, but the live configuration should not be present. There are some exceptions to this rule, but you should never commit a setting that could change between environments. This tends to come up when you have base settings which get extended for a particular deployment (for example, having a base settings file, then a base dev environment settings file, then finally the particular deployment config that extends in that order).
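That layered layout might look something like this (file names are only examples):

```
settings/
  base.py           # committed: defaults shared by every environment
  dev.py            # committed: extends base for the dev environment
  local.py          # NOT committed (gitignored): this deployment's secrets
  local.py.sample   # committed: documents the structure of local.py
```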

Use .gitignore

Whenever you start a project, you should know how your settings files are going to be named or the folder that they will be present in. Immediately add those to your .gitignore. Don’t slack on this; someone will do a git add . at some point.

Beyond that, all developers should be aware of their development environment. Sublime Text is a fairly popular editor right now and many people are using a rather large collection of plugins. You should be aware of the artifacts these plugins leave in your projects and immediately add them to your global gitignore.
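Setting up a global ignore file takes two commands. The file path is just the conventional choice, and sftp-config.json is another common Sublime plugin artifact I’m including as an example:

```shell
# Point git at a global ignore file (any path works; this one is conventional)
git config --global core.excludesfile ~/.gitignore_global

# Ignore known plugin artifacts everywhere, e.g. FTP Sync's settings file
printf '%s\n' 'ftpsync.settings' 'sftp-config.json' >> ~/.gitignore_global
```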

Is this really a big issue?

Yes. Using my example of Sublime Text, the FTP Sync plugin is fairly popular. Unfortunately the plugin leaves an ftpsync.settings file behind as an artifact. The file also includes your username and password for your FTP server in plaintext. As such, a simple advanced search on Github yields 86 code matches for the file ftpsync.settings, most of which include host, username, and password in plaintext.

Very similar results can be found for any other artifact files you can think of that might not be ignored.


Playground

So people might be wondering what this project is and how it’s coming along. I wanted to address the project formally and give some updates.

What is this?

Playground is my new project to improve the system for accepting raw code submissions, executing them, and then returning the output. As an extension, we can then diff that output or execute another program using that output as the STDIN.

So really this is a project to make a student code submission service. I wanted to do this to help improve the system that is in place at RPI.

I have a few main things that I want to get out of this program:

  1. Completely sandboxed submission execution
  2. Distributed workers so submissions can be processed in parallel
  3. Allow scripting of result checking
  4. Limit as much as possible at the kernel level

So what’s going on?

Currently I’m playing around with getting a secure sandbox for executing the code. To that end I’ve been experimenting with SELinux, LXC, PAM, and Docker to find a combination that I consider secure and that gives the level of customization I want.

Some of the issues I look into when exploring these options are ensuring that I can get a sandboxed filesystem and that I’ll have the ability to limit a process’ execution in terms of CPU time and memory utilization. Each package has its ups and downs, and honestly all of them are very feature rich but lacking in documentation.

My current setup

Right now I’ve settled on a setup using a host machine (I currently use a VM for this) that employs SELinux and PAM to limit a process’ abilities. My worker receives a submission, hashes the task id (pretty much to ensure I get a valid username), then runs useradd -d /home/%user% to get a valid user account to execute as. From there I set up a separate tmp directory for it to use.

From there I wrap the execution of the process with seunshare to override the $HOME and $TMP of the process execution context with the restricted directories created just for this user, then wrap further with ulimit to limit CPU time and max memory. Finally I capture STDOUT, STDERR, and the return code and pass those back over the MQ.
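The wrapping described above might look something like the script below. The limits, paths, and sandbox user are placeholders, and since seunshare requires a running SELinux host, it’s written out as a script rather than executed directly:

```shell
# Sketch of the execution wrapper: ulimit for CPU/memory, seunshare to
# swap in the per-user restricted $HOME and $TMP. Values are examples only.
cat > run_submission.sh <<'EOF'
#!/bin/sh
SANDBOX_USER="$1"; shift
ulimit -t 10        # at most 10 seconds of CPU time
ulimit -v 262144    # at most ~256 MB of virtual memory (in KB)
# Override $HOME and $TMP with the restricted per-user directories, run
# the submission, and capture stdout, stderr, and the return code
seunshare -h "/home/$SANDBOX_USER" -t "/tmp/$SANDBOX_USER" -- "$@" \
    > stdout.log 2> stderr.log
echo $? > returncode
EOF
chmod +x run_submission.sh
```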

Wait wait wait

You said you wanted separate, isolated filesystems? And where are Docker and LXC in this?

Well, I compromised. You see, having that level of isolation is nice, but at a certain point I’ve gone overkill. SELinux and PAM are currently providing more than enough security without a large overhead. Plus, I would still need PAM and SELinux even if I isolated further with LXC. The problem became that, sure, I could limit the LXC container in terms of memory, but that’s not actually what I wanted: I want to limit a specific process, not the entire container.

Why I’m happy with this

Currently I can explicitly state how long a process can run on a CPU (this is not equal to wall-clock runtime; sleep doesn’t use any CPU time) and set memory limits with the process thinking the kernel is the one restricting it. For example, I’m not running kill -9 on your proc after you pass the memory limit; rather, your process actually thinks it’s out of memory.
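A quick way to see that behaviour, using dd as a stand-in for a submission (the exact limit and buffer sizes are arbitrary):

```shell
# Cap virtual memory at ~10 MB, then ask dd for a 50 MB buffer: the
# allocation fails *inside* dd ("memory exhausted") rather than the
# kernel killing the process from outside.
sh -c 'ulimit -v 10240; dd if=/dev/zero of=/dev/null bs=50M count=1' 2> err.txt || true
cat err.txt
```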

Going forward

Right now I only have support for Python. I will eventually expand this out and currently plan to add compiled languages. To support this I’m going to break out each language into separate queues (and possibly further by major version) so that not all agents have to have all the operating environments.

Furthermore I’m going to continue working on my SELinux config to tighten down what processes can and cannot do.

I also need to greatly expand the options that can be set when a process is run; this mainly means setting up a runtime environment to handle the input for the process.

After I’m comfortable with all that, I need to begin working on the checker system. This is going to branch in two directions: static and dynamic checking. Pretty much, I’m either going to statically diff your output with expected results, or I’m going to pipe your output into another program that returns 0 if you’re good, or anything else if you failed.
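The two checker directions could be sketched like this (the checker function here is just a stand-in for a real checker program):

```shell
printf 'hello\n' > student_output.txt
printf 'hello\n' > expected_output.txt

# Static checking: diff the captured output against the expected results
if diff -q student_output.txt expected_output.txt > /dev/null; then
    echo "static check: PASS"
fi

# Dynamic checking: pipe the output into a checker program; a return
# code of 0 means pass, anything else means fail
checker() { grep -q 'hello'; }      # stand-in for a real checker program
if checker < student_output.txt; then
    echo "dynamic check: PASS"
fi
```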

Finally I’ll whip up a nice interface to tie this whole thing together.

Stream to Euro Truck Simulator 2

Oddly enough, I’ve recently become addicted to Euro Truck Simulator 2, a very good trucking simulator. One of the great features of the game is built-in support for playing MP3 streams within the game. It comes prepacked with a bunch of European stations, but I wanted to add some of my local stations that are on iHeartRadio.

This led to two problems: finding the stream URL and then transcoding the stream to MP3.

Getting the stream

This step varies quite a bit, since each station might stream through a different application, but in general you can do some packet inspection to find the stream source. In my case, someone very kindly compiled a TSV file of iHeartRadio stations along with their stream URLs. Unfortunately, these streams are not served as MP3.


Transcoding the stream

One of my favorite applications, and one very commonly installed these days, is VLC, which came in quite handy here. Once you have the stream URL, follow these steps:

  1. In VLC go to Media -> Stream…
  2. Select the Network tab and enter the URL
  3. Click Stream button
  4. On the next screen click on Destination Setup
  5. Under Destinations -> New Destination option select HTTP
  6. Click Add
  7. Leave the default options or change Port from 8080 if it’s in use
  8. Ensure that Active Transcoding is checked.
  9. Under Transcoding options -> Profile select Audio - MP3
  10. Click Stream
  11. Note that the stream URL is now http://localhost:8080 (or another port you selected)

Simply repeat these steps for any stream you want transcoded. Once you have set this stream up in the game you will not have to redo the in-game configuration.
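For what it’s worth, the GUI steps above roughly correspond to a cvlc one-liner. The sout chain below is my best guess at the equivalent (the mux choice in particular is an assumption), and the input URL is a placeholder; since the command streams until interrupted, it’s written to a script rather than run here:

```shell
# Sketch of the command-line equivalent of the VLC GUI transcoding steps
cat > transcode-stream.sh <<'EOF'
#!/bin/sh
# Placeholder input URL; replace with the real station stream URL
cvlc 'http://example.com/station-stream' \
    --sout '#transcode{acodec=mp3,ab=128}:standard{access=http,mux=raw,dst=:8080}'
EOF
chmod +x transcode-stream.sh
```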

Adding stream to game

  1. Open My Documents -> Euro Truck Simulator 2. Open the file live_streams.sii in a text editor.
  2. Copy the last line that looks like stream_data[###]: "http://someurl:8080|Name" and paste it on the line below.
  3. Change the ### to the next sequential number and the URL to a valid MP3 stream. Change the stream name after the | (pipe) to Localhost or another distinctive name.
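The added line might end up looking like this (the index 34 and the name are just examples):

```
stream_data[34]: "http://localhost:8080|Localhost"
```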

Request Tracker - Map AD OU to Queue

While working for a school district that ran a Windows shop, I decided to branch out and use Request Tracker for support tickets. The software allowed us to handle issues more efficiently; however, it had a major drawback in that someone would need to manually assign incoming tickets to the proper queue. Since each school building had its own tech(s), we gave each building a queue and then made that building’s tech(s) masters of those queues. The manual labor of assigning tickets was definitely an issue.

By default, Request Tracker’s LDAP integration will just download the data from LDAP when a user is created, and that’s about it. We needed it to use LDAP info when a ticket is being created to map it to the proper queue. To do this we needed to use a Scrip (yes, without the t). Scrips are RT’s answer to extensibility, sadly written in Perl.

So I decided to tackle this challenge with the following Scrip.

Final Code
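A sketch of the custom action preparation code might look like the following. Every host name, credential, and the OU-to-queue map here are placeholders you must replace: it looks up the requestor in AD by e-mail, pulls the first OU component out of their distinguished name, and moves the ticket into the mapped queue.

```perl
use strict;
use warnings;
use Net::LDAP;

# --- Everything below is placeholder configuration: fill in your own ---
my $ldap_host = 'ldap://dc.example.local';
my $bind_dn   = 'CN=rt-service,OU=Service Accounts,DC=example,DC=local';
my $bind_pw   = 'changeme';
my $base_dn   = 'DC=example,DC=local';

# Map AD OU names to RT queue names (hypothetical examples)
my %ou_to_queue = (
    'High School'   => 'HS Support',
    'Middle School' => 'MS Support',
);

my $ticket = $self->TicketObj;
my $email  = $ticket->RequestorAddresses;

my $ldap = Net::LDAP->new($ldap_host) or return 1;
$ldap->bind($bind_dn, password => $bind_pw);

# Find the requestor in AD by e-mail address
my $result = $ldap->search(
    base   => $base_dn,
    filter => "(mail=$email)",
    attrs  => ['distinguishedName'],
);

if ($result->count) {
    my $dn = $result->entry(0)->get_value('distinguishedName');
    # Pull the first OU component out of the DN and map it to a queue
    if ($dn =~ /OU=([^,]+)/ && exists $ou_to_queue{$1}) {
        $ticket->SetQueue($ou_to_queue{$1});
    }
}
$ldap->unbind;
return 1;
```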


You’ll need to add a Scrip by going to Tools -> Configuration -> Global -> Scrips -> Create and fill out the form with the following information:

  • Description: Give any description you want that lets you know what this Scrip does. **Prefix your description so that it will be first when sorted alphabetically (this is the order in which Scrips are run).** This is important since you'll want to change the queue before the Scrip that notifies the queue is run.
  • Condition: On Create
  • Action: User Defined
  • Template: Global Template: Blank
  • Custom Condition: Leave empty
  • Custom action preparation code: Fill this with the code from above (make sure to fill in your configuration)
  • Custom action cleanup code: return 1;

Now save and you should be all set.

Note: This has only been tested on 4.0 <= RT version <= 4.0.13.