Security for JS Developers: A Presentation

other, presentations, security

On Feb 16th, 2016 I gave a presentation to yycjs on security for JS developers. The presentation covers:

The above link will take you to detailed explanations of the topics I’ve covered previously, and I’ll work on getting a post written up for the timing attack.

As part of the presentation I used a neat tool called sqlmap which you can read more about. And I also referenced BCrypt multiple times. There are good reads on BCrypt and hashing passwords in 2016.
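If you haven’t seen it before, here’s a minimal sketch of what using BCrypt looks like from Ruby via the bcrypt gem (the password strings here are obviously made up):

require 'bcrypt'

# Hash a password for storage. BCrypt generates a salt and embeds it,
# along with the cost factor, in the resulting string.
hashed = BCrypt::Password.create("correct horse battery staple")

# Verify a login attempt. BCrypt::Password#== re-hashes the candidate
# using the stored salt and compares the results.
stored = BCrypt::Password.new(hashed.to_s)
stored == "correct horse battery staple"  # => true
stored == "hunter2"                       # => false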

Creating a Safe Filename Sanitization Function

brakeman, ruby, security

In a previous post on File Access Vulnerabilities I mentioned the use of a sanitize function. Sanitize functions are needed because you don’t always have full control of file names or file paths provided by a user. And when you can’t control file names/paths, the attack surface of your application increases.

This post will work through the creation of a file sanitization function, contrast whitelisting vs blacklisting, and look at a gem to handle sanitization.

Let’s start with an example of code that would need a sanitize function:

def download
  language_code = params[:code]
  send_file(
    "#{Rails.root}/config/locales/#{language_code}.yml",
    filename: "#{language_code}.yml",
    type: "application/yml"
  )
end

This is from a question asked on StackOverflow. The questioner stated that params[:code] was dynamic and couldn’t be determined a priori. They were correct in assessing that this is vulnerable to an attacker submitting an HTTP request with the parameter: code=../../../config/database. Bam! Compromised database.yml file.

This means that the input to the above function needs to be sanitized so that the system doesn’t get compromised.

Whitelisting vs Blacklisting

There are two main methods you can use to sanitize user input: whitelisting or blacklisting.

  • Whitelisting is the act of setting what characters are allowed.
  • Blacklisting is setting what characters are not allowed.

The distinction is subtle but makes a huge difference for the security and usability of a function.

Generally speaking you want to reach for a whitelisting function before a blacklisting function. This is because whitelists (if done properly) are safer – you’re stating what is allowed vs trying to exclude all the bad things that shouldn’t be allowed. With a blacklist you’ll typically miss something and voilà, an attacker has an in. You’re smart, but when someone is motivated they’ll figure out a way to be smarter than you!

This particular instance is nice since the download is restricted to .yml files, meaning you can be extra aggressive in your whitelisting. Let’s write a naive whitelist function:

def sanitize(filename)
  # Replace any character that isn't 0-9, A-Z, or a-z with an underscore
  filename.gsub(/[^0-9A-Z]/i, '_')
end

In the above case, if you used the malicious string ../../../config/database the output is just what you’d want: _________config_database. The slashes and dots are all removed and your database.yml is safe. You could have skipped replacing the ‘bad’ characters with an underscore _, but I prefer underscores since it’s more friendly/readable for the normal, non-attacker use case.
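For context, here’s how that sanitize call might slot into the download action from earlier (just a sketch wiring the two snippets together):

def download
  # Sanitize the user-supplied code before it ever touches the file path
  language_code = sanitize(params[:code])
  send_file(
    "#{Rails.root}/config/locales/#{language_code}.yml",
    filename: "#{language_code}.yml",
    type: "application/yml"
  )
end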

But! (there’s always a but) You’ve got some additional considerations. While the above function is safe, it is limited to a minimal character set. What happens if you insert characters like é or 猪 into that function? They get stripped out (well, replaced with underscores)!

In this case, you’re probably ok with that given the context of the files. You likely have full control of the language files so you can make assertions in your sanitization. But that’s not always the case.

This is where whitelists can become unwieldy. As a programmer you don’t want to go and define every single character that you want to allow; that’s tedious. That’s where the blacklist function comes in. Let’s see it:

def sanitize(filename)
  # Bad as defined by wikipedia: https://en.wikipedia.org/wiki/Filename#Reserved_characters_and_words
  # Also have to escape the backslash
  bad_chars = [ '/', '\\', '?', '%', '*', ':', '|', '"', '<', '>', '.', ' ' ]
  bad_chars.each do |bad_char|
    filename.gsub!(bad_char, '_')
  end
  filename
end

Using the function with some weird input: 猪<lǝgit> "input" °?I |s:*w*:é::ä::r: /\.?%ʎן octopus you get the following back: 猪_lǝgit___input__°_I__s__w__é__ä__r_______ʎן_octopus. And while this isn’t the prettiest filename, it’s what the user wanted!

This code is more complex than the whitelisting sanitize, and it’s more permissive. It’s also more user friendly since it’s giving the user what they put in.

Alternatives

The last piece to mention is alternatives. If you’re looking for a good gem that does this for you I’d recommend Zaru. It handles the same “bad characters” as the blacklist sanitize above, and also handles some Windows edge cases for reserved words. Plus it’s got a test suite, which is a comfort when you’re looking at filename sanitization!
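Usage is about as simple as it gets; a rough sketch (double-check the gem’s README for the current API) looks like this:

require 'zaru'

# Strips reserved characters and handles Windows reserved words
safe_name = Zaru.sanitize!(params[:file_name])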

Security for Ruby Developers: A Presentation

presentations, rails, ruby, security

On Jan 5th, 2016 I gave a presentation to YYCRuby on security for Ruby developers. The presentation covers:

If you weren’t present, the slides probably won’t make a whole lot of sense. The above links will take you to detailed explanations of the topics I’ve covered previously.

I mentioned in the presentation some fantastic external tools that you can use to secure your app. They are:

How Do Ruby/Rails Developers Keep Updated on Security Alerts?

rails, ruby, security

It’s one thing to know about SQL Injection, File Access Attacks, XSS, and other security hazards. And because you’re a great developer you’re regularly squashing vulnerabilities in your app. Yet every Ruby/Rails developer relies on someone else’s code to do their work. And guess what? No matter how careful you are, no matter how much time you spend perfecting your code, someone else’s code is going to have a security bug (yours will too, if you and I are being honest with each other!).

One of the questions that a few people have emailed me with now is: “How do I stay up to date on Ruby/Rails security?” It’s a great question! First because not enough developers care about security. Second because there are a lot of great tools out there to help protect your app. Let’s look at how you can stay up to date.

Follow Relevant Mailing Lists

The first step in keeping your app up to date and protected is to keep up with the news. The two main sources of security news are the Ruby Security Mailing List and the Rails Security Mailing List. Both lists focus on security and will give you the best warning that an attack/fix is coming down the pipe.

Follow CVE Reports

Now the Ruby and Rails mailing lists are great, but you have A LOT more dependencies in your app than that: nokogiri, rack, thin, puma, etc. Unless there was a major issue in these gems they’re not going to make the Rails or Ruby mailing lists, so you need to get that information from elsewhere!

One of the little-known resources for keeping up with security vulnerabilities is CVE databases. There are a few different sites that offer this type of information; CVE Details is my favorite because it’s easy to consume the information.

Ruby and Rails both have dedicated pages, and you can create an RSS feed of those pages. And for the major gems in your site, navigate to their pages and create an RSS feed for them as well!

Keep Code Updated

A simple way to keep your application protected against the latest vulnerabilities is to not let your dependencies become outdated. To do this, run bundle outdated on your codebase and update the gems that are out of date.

This is usually easier said than done because updating dependencies can cause your application to break in unexpected ways. The mitigation for that is keeping your tests up to date. If you can update a gem, run your test suite, and nothing breaks (assuming >85% coverage), then you’re likely in a good spot to roll that dependency upgrade into production after some QA.

Process

A piece of advice that I’ve read about security comes from Thomas Ptacek, who founded Matasano, a security consultancy. His advice was:

Put someone on your team in charge of tracking your dependencies (C libraries, Ruby gems, Python easy_install thingies) and have a process by which you periodically check to make sure you’re capturing upstream security fixes. You should run your service aware of the fact that major vulnerabilities in third-party library code are often fixed without fanfare or advisories; when maintainers don’t know exactly who’s affected how, the whole announcement might happen in a git commit.

I like this advice because it’s easy. Start by having a security day with your team. Buy some pizza and beer and go through your Gemfile.lock querying the CVE Details database and reviewing the gem’s repository. Then triage any issues and schedule fixes.
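To take some of the tedium out of that review, here’s a quick sketch using Bundler’s lockfile parser to print every pinned gem and version so the team can work down the list against CVE Details:

require 'bundler'

# Parse Gemfile.lock and list each dependency pinned in it
lockfile = Bundler::LockfileParser.new(File.read("Gemfile.lock"))

lockfile.specs.sort_by(&:name).each do |spec|
  puts "#{spec.name} #{spec.version}"
end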

Why get the whole team together vs one person? Personally I’m a fan of having the entire team involved in security since it creates a culture of good practices vs just a single developer.

Tooling

The above process sounds cumbersome and manual (and some of it is going to be), but there are tools you can leverage to automate this type of work for you.

Bundler Audit is one of the nicer tools. It uses the rubysec advisory database to check for vulnerable gems in your Gemfile.lock file, and it also flags insecure gem sources. And it’s as easy as running the bundle-audit command.

This is a nice gem because it takes the research legwork out of your dependency updating. And bonus points since it can fit in nicely as a CI build step. There are also paid services that audit gem files as well, like AppCanary, Hakiri, and Gemnasium.
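As a sketch of that CI integration (assuming the bundler-audit gem is in your Gemfile), a small Rake task can fail the build whenever a known-vulnerable gem shows up:

# In your Rakefile
namespace :security do
  desc "Check Gemfile.lock against the Ruby advisory database"
  task :audit do
    # --update pulls the latest advisories; a non-zero exit fails the build
    sh "bundle exec bundle-audit check --update"
  end
end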

Why Stay Updated With Security?

Wrapping up, I want to emphasize that it is important to keep dependencies up to date. Security is sometimes a tough effort to justify because when it’s working you’ll rarely notice.

And with your apps security it doesn’t pay to be complacent:

Heard about a vulnerability? The adversary is not a stressed human like you. It’s a for loop. The vuln is not secret; after all, you know.
  – Patio11 (Patrick McKenzie)

Fixing File Access Vulnerabilities in Ruby/Rails

brakeman, rails, ruby, security

Following up on a previous post about Command Injection Vulnerabilities, this post is going to look at File Access Vulnerabilities.

File Access vulnerabilities fall under the category of Insecure Direct Object Reference vulnerabilities in the OWASP top 10 lists. In both the 2010 and 2013 lists, Insecure Direct Object References came in at number 4. 🎉 🎊 🎆

What is a File Access Vulnerability?

A File Access vulnerability is when an attacker can use various calls to create, modify, or delete files on your server’s file system or a remote file system (e.g., S3) that they shouldn’t have permission to modify. Here’s an example of a call that would allow an attacker to link your database file into the public directory of a Rails server:

# http://domain.com?payload=config/database.yml
payload = params[:payload]

path = Rails.root.join(payload)
id = SecureRandom.uuid

File.link(path, "public/#{id}")

redirect_to "/#{id}"

While this example is contrived and code in the wild is not likely to look this obvious, the above is a perfect example of how this type of attack functions. The attacker is able to manipulate your code into linking (and therefore exposing) a file that you wouldn’t want leaked.

Now the difficult thing is that there are an enormous number of methods that are vulnerable to File Access attacks. Pulling from the Brakeman source code, we can create a list of methods where a File Access vulnerability could occur:

# As of Oct 25, 2015
# From: https://github.com/presidentbeef/brakeman/blob/d2d49bd61f2d77919df17fd8dce6193cf1d1ada2/lib/brakeman/checks/check_file_access.rb#L11-L27

# Dir:
Dir[]
Dir.chdir
Dir.chroot
Dir.delete
Dir.entries
Dir.foreach
Dir.glob
Dir.new
Dir.open
Dir.rmdir
Dir.unlink

# File
File.delete
File.foreach
File.lchmod
File.lchown
File.link
File.new
File.open
File.read
File.readlines
File.rename
File.symlink
File.sysopen
File.truncate
File.unlink

# FileUtils
FileUtils.cd
FileUtils.chdir
FileUtils.chmod
FileUtils.chmod_R
FileUtils.chown
FileUtils.chown_R
FileUtils.cmp
FileUtils.compare_file
FileUtils.compare_stream
FileUtils.copy
FileUtils.copy_entry
FileUtils.copy_file
FileUtils.copy_stream
FileUtils.cp
FileUtils.cp_r
FileUtils.getwd
FileUtils.install
FileUtils.link
FileUtils.ln
FileUtils.ln_s
FileUtils.ln_sf
FileUtils.makedirs
FileUtils.mkdir
FileUtils.mkdir_p
FileUtils.mkpath
FileUtils.move
FileUtils.mv
FileUtils.pwd
FileUtils.remove
FileUtils.remove_dir
FileUtils.remove_entry
FileUtils.remove_entry_secure
FileUtils.remove_file
FileUtils.rm
FileUtils.rm_f
FileUtils.rm_r
FileUtils.rm_rf
FileUtils.rmdir
FileUtils.rmtree
FileUtils.safe_unlink
FileUtils.symlink
FileUtils.touch

# IO
IO.foreach
IO.new
IO.open
IO.read
IO.readlines
IO.sysopen

# Kernel
Kernel.load
Kernel.open
Kernel.readlines

# Net::FTP
Net::FTP.new
Net::FTP.open

# Net::HTTP
Net::HTTP.new

# PStore
PStore.new

# Pathname
Pathname.glob
Pathname.new

# Shell
Shell.new

# YAML
YAML.load_file
YAML.parse_file

That’s a nasty long list. What it means is that when you make one of these calls with input a user controls, they can attack your system!

To top it all off, there are numerous different types of attacks that could be performed. They’re all dangerous and slightly different:

Filling up disk space:                         FileUtils.copy, FileUtils.cp, File.new, IO.new, PStore.new
Move a file to a downloadable location:        File.rename, FileUtils.move
Linking a file to a downloadable location:     File.link, File.symlink, FileUtils.link, FileUtils.ln
Bricking your server (DoS):                    Dir.delete, FileUtils.rm
Changing permissions to directories (DoS):     File.chmod, File.chown, FileUtils.chmod, FileUtils.chown
Renaming key files:                            File.rename, FileUtils.move, FileUtils.mv
Leaking paths:                                 FileUtils.pwd
Downloading malicious files onto your server:  Net::FTP.new, Net::HTTP.new
Launch an attack against another website:      Net::FTP.new, Net::HTTP.new

Some are more harmful than others, and typically an attacker is going to leverage one or more of these vulnerabilities to escalate their privileges and own your system. From Wikipedia:

Privilege escalation is the act of exploiting a bug, design flaw or configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user.

How do you Fix File Access Vulnerabilities?

The best technique for preventing File Access vulnerabilities is not allowing them to happen in the first place by avoiding unnecessary system-level operations.

Thanks, Captain Obvious! 🤦

While that advice is correct, it’s not necessarily good or helpful, so let’s look at the techniques you can use to keep File Access attacks from happening when you do need to work with your system.

Restriction via Identifier

The first way to do that is by using an identifier to refer to files on disk. This identifier will take the form of an id, hash, or GUID.

# HTML
<select name="file_guid">
  <option value="690e1597-de8d-4912-ac04-d0e626f806f4">file1.log</option>
  <option value="2e157fa3-ea1e-4b46-931e-c0f8b10bfcb2">file2.log</option>
  <option value="fffb938b-07bc-472c-a48f-383123a9f04d">file3.log</option>
</select>

# Controller
download = FileDownload.find_by(file_guid: params[:file_guid])
send_file(download.path, filename: download.name, type: "text/plain")

Notice in the above code that a GUID is used as the value that gets submitted to the server, not the actual file name. This makes it impossible for an attacker to download a file they’re not allowed to, and it also keeps you safe from any manipulation of the file name or path. This technique works for moving, deleting, renaming, and sending files as long as you know file names and paths ahead of time. It is the best way to secure your app.
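For completeness, here’s a sketch of how those FileDownload records might get created ahead of time (the model and its columns are assumptions based on the controller above); the key point is that the GUID, name, and path are all generated server-side:

# Run server-side (e.g. when the log files are produced), never from user input
Dir.glob(Rails.root.join("log", "*.log")).each do |path|
  FileDownload.find_or_create_by!(path: path.to_s) do |download|
    download.file_guid = SecureRandom.uuid
    download.name      = File.basename(path)
  end
end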

Partial Restriction

Ideally you wouldn’t have to resort to any other techniques for protection, however the real world is a bit messier. And sometimes you don’t have all the information you need in order to use an identifier. In cases like this you want to “sandbox” your users as much as possible by limiting access within the file system:

# HTML
<select name="file_name">
  <option value="file1.log">file1.log</option>
  <option value="file2.log">file2.log</option>
  <option value="file3.log">file3.log</option>
</select>

# Controller
file_name = sanitize(params[:file_name])

# if possible current_user.download_directory should be an identifier
# and controlled 100% by the server.
download_path = "downloads/#{current_user.download_directory}/#{file_name}"

if File.exist?(download_path)
  send_file download_path, filename: file_name, type: "text/plain"
else
  # return an error message
end

Here you can use a sanitize function to clear params[:file_name] of any dangerous characters. In this way you’re accessing the file system in a controlled manner.
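If you want another layer of defense on top of the sanitize call (a sketch, not part of the controller above), you can also expand the final path and confirm it still lives inside the user’s download directory before calling send_file:

base_dir      = File.expand_path("downloads/#{current_user.download_directory}")
download_path = File.expand_path(file_name, base_dir)

# Reject anything that escaped the sandbox despite sanitization
unless download_path.start_with?(base_dir + File::SEPARATOR)
  raise "Attempted path traversal: #{file_name.inspect}"
end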

Filtered Restriction

The next technique to limit file access trouble is restricting access to specific file types. Here you want to whitelist the types of files that a user can access, such as only .pdf files on the server:

payload = sanitize(params[:filename])
if payload =~ /\.pdf\z/
  send_file("downloads/#{payload}", filename: 'report.pdf', type: "application/pdf")
else
  raise "Unknown file format requested"
end

This is a line of defense that makes sure that you’re not leaking any sensitive information like a database.yml file. And again make sure to use a sanitize function!

Where you have to be careful is that whitelisted file extensions can be exploited if an attacker is able to move or rename files. Specifically, if they are able to add a .pdf extension to database.yml then they’re able to download the database.yml.pdf file. That’s where chaining multiple vulnerabilities comes in, as mentioned before. An attacker uses one File Access vulnerability to rename the file, and another to download it.

Store User Files on a Different Server

These days disk space is cheap. One great way to avoid opening your web server up to compromise is to limit data stored on the system. This means leveraging tools like Amazon S3, or DreamHost’s Dream Objects to store user files, generated reports, etc. on a server that is loosely coupled to your app.

As I mentioned in the opening paragraph of this post, you can still shoot yourself in the foot with external storage and have an attacker gain access to files they shouldn’t. Storing your files externally simply separates systems (creating a boundary) so that a compromise of your data storage system doesn’t also compromise your web server.
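As a sketch of what that loose coupling can look like with the aws-sdk gem (the bucket name and Report model here are made up), the web server hands out a short-lived presigned URL instead of ever reading the file from its own disk:

require 'aws-sdk'

# Hypothetical bucket and model; the file lives in S3, not on the web server
report = current_user.reports.find(params[:id])

s3     = Aws::S3::Resource.new(region: "us-east-1")
object = s3.bucket("my-app-user-reports").object("reports/#{report.id}.pdf")

# A signed URL valid for 5 minutes; the client downloads straight from S3,
# so the web server never touches its own file system for this request
redirect_to object.presigned_url(:get, expires_in: 300)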

The added benefit of storing data on other servers is that it will help you scale the load your servers can handle and can reduce processing cycles for those files.

Use an Intermediary

One of the great tools to come out of the "dev ops revolution" is Chef. I use Chef on a regular basis; in the apps I develop, our team uses it to manage server configuration. With Chef you can create a boundary between your Ruby/Rails app and your configuration code. Then when you’re passing information between the web app and Chef you can ask yourself: "Is this data dangerous?" It’s a subtle distinction, and if you’re doing enough system calls it’s worth the investment.

But before you jump on the Chef bandwagon, having an intermediary isn’t going to solve the File Access problem. At the end of the day you’re going to need to pay attention to what you’re doing. The nice bit about Chef is that you can come up with ground rules on your team like:

  • No system calls in the main app, only in Chef
  • Heavily sanitized user input, used sparingly in Chef
  • Code review by two or more people for Chef changes
  • Quarterly review of Chef code for vulnerabilities

You get the idea: create a separation of concerns between safe code and hazardous code!

Use Dangerous Methods Sparingly

There’s a good chance that a lot of the methods listed above won’t be useful for you. And really that’s the best case scenario. At the end of the day, not using a dangerous method is the #1 technique for keeping your app safe.

When you’re being asked to implement the amazing new feature that involves file access, you can provide constructive feedback on potential harms that these types of features can bring to the table.