<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Bryan Rite]]></title><description><![CDATA[I’m a full stack software developer working mainly in Ruby. I write occasionally about my work, best programming practices and other interests.]]></description><link>http://bryanrite.com/</link><generator>Ghost 0.11</generator><lastBuildDate>Tue, 17 Mar 2026 18:41:01 GMT</lastBuildDate><atom:link href="http://bryanrite.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Simplifying Complex Rails Apps with Operations]]></title><description><![CDATA[<p>I work on several large and mature Rails applications and have recently been feeling a lot of pain as these applications become more and more complex.</p>

<p>I started examining where these issues were occurring in our code bases, taking a hard look at how we got there, and doing lots</p>]]></description><link>http://bryanrite.com/simplifying-complex-rails-apps-with-operations/</link><guid isPermaLink="false">ece42ae9-ded4-4427-b0b0-b787c4bb4538</guid><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[Refactoring]]></category><category><![CDATA[Operations]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Fri, 11 Nov 2016 08:38:00 GMT</pubDate><content:encoded><![CDATA[<p>I work on several large and mature Rails applications and have recently been feeling a lot of pain as these applications become more and more complex.</p>

<p>I started examining where these issues were occurring in our code bases, taking a hard look at how we got there, and doing lots of research into why these things are the way they are.</p>

<p>It wasn’t until I came across Piotr Solnica’s <a href="https://solnic.codes/2015/12/07/introducing-dry-validation/">dry-validation</a> gem and some of the ideas behind Nick Sutterer’s <a href="http://trailblazer.to/">Trailblazer</a> framework that I was able to piece together several seemingly separate problems that ultimately are best solved with Nick’s concept of an “Operation”.</p>

<p>I first want to discuss a few of the pain points with complex Rails applications and ultimately how the idea of an Operation can solve some of them.</p>

<p>Short on time? <a href="http://bryanrite.com/simplifying-complex-rails-apps-with-operations/#tldr">tl;dr</a> or listen:</p>

<iframe width="100%" height="100px" src="https://everlit.audio/embeds/artl_0xPk5YC0KJy" title="Everlit Audio Player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<h4 id="userinteractionpersisteddatarepresentation">User Interaction &amp; Persisted Data Representation</h4>

<p>Rails, by default, fosters a coupling between how the user interacts with your system and how data is persisted. When your UI or API starts to interact with more than one ActiveRecord Model in a single action, things quickly start to fall apart and you get stopgaps like the oft-maligned <code>accepts_nested_attributes_for</code> or <code>validate_associated</code>. You start passing attributes for models through other models, none of which necessarily have anything to do with the other.</p>

<p>Actions your users can perform and how you store data are not the same thing, so why are our UIs and forms married to our data store schemas? It may start out that way when you scaffold and CRUD your initial models, but as your application and business logic grow more complex, they tend to move away from strict data entry.</p>

<p>Your UI and business logic should not be coupled with your ActiveRecord Models. ActiveRecord Models are a representation of your persisted data; they are not a place for business logic, but are <em>used</em> by business logic to persist data. Decoupling your business logic from the way you access data allows your business domain to manipulate data without causing a ripple effect across other processes. ActiveRecord models should be entirely devoid of logic except perhaps to enforce some internal consistency, such as associations, a state machine, or data-store-enforced restrictions (not null, unique, etc.).</p>

<p>Your models are the building blocks of how your business processes work within your system, and if they do anything other than keep themselves consistent and persistent, they can become hard to reuse in all business contexts.</p>

<h4 id="activerecordandvalidation">ActiveRecord and Validation</h4>

<p>Reading Piotr Solnica’s blog post <a href="https://solnic.codes/2015/12/28/invalid-object-is-an-anti-pattern/">Invalid Object Is An Anti-Pattern</a> and some of the points he made got me re-thinking about how validations should work, especially in the context of a large, complex system.</p>

<p>As explained above, ActiveRecord Models are a representation of your persisted data. Being able to represent <em>something invalid</em> as <em>persisted data</em> is a real problem… more so when you’re delegating through many levels of service objects and POROs, where you don’t have any assurances of where the object came from and its internal consistency… and you shouldn’t have to care, but as Piotr says so eloquently:</p>

<blockquote>
  <p>You can’t treat them [ActiveRecord models] as values as they are mutable. You can’t really rely on their state, because it can be invalid.</p>
</blockquote>

<p>As a result, you can’t trust the data that you are passing around your application. How many <code>if</code> or <code>present?</code> statements have you thrown around in your code to deal with incomplete models?</p>

<p>ActiveRecord models should be limited to valid and persisted data, and the act of validating input to create or update this data made in a different context.</p>

<h4 id="contextualvalidation">Contextual Validation</h4>

<p>At first, validation seems pretty simple. Your models <strong>always</strong> adhere to your rules. It must <strong>always</strong> have this value present, it must <strong>always</strong> have an integer greater than 5, and so on. These rules live in your model, to enforce its internal state.</p>

<p>As your application grows more complex, <strong>always</strong> gives way to <em>usually</em>. <em>Usually</em> it has to be greater than 5, but an admin can set it to anything. Requests from the UI can return a maximum of 10 records, but API requests can return a maximum of 50.</p>

<p>Validation turns out to be very contextual. What is valid can depend a lot on who is making the changes, where they are making them, when they made them, and the values of other models at the time. We can use <code>if</code> and <code>unless</code> in our ActiveRecord Models to solve some of these problems, but we start to leak knowledge of other models, user roles, and business logic into a model supposedly used only for interacting with persisted data.</p>

<p>There is a difference between what is <strong>always</strong> enforced for a model and what is enforced from the business domain’s perspective.</p>

<p>When is <strong>always</strong> true? To me, only things enforced by the data store, such as uniqueness indexes or <code>NOT NULL</code> attributes. These things have to be true, otherwise the model won’t save. Any other value is a business domain decision and its validation can be dependent on the context the data was received.</p>
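<p>The UI/API example above can be made concrete with a plain-Ruby sketch (the method and limits here are hypothetical; a gem like <code>dry-validation</code> gives you a much richer rule language for this):</p>

```ruby
# Contextual validation: the same input is checked against different
# limits depending on where the request came from.
MAX_PER_PAGE = { ui: 10, api: 50 }.freeze

def validate_per_page(per_page, context:)
  max = MAX_PER_PAGE.fetch(context)
  per_page <= max ? [] : ["per_page must be at most #{max}"]
end

validate_per_page(25, context: :ui)   # => ["per_page must be at most 10"]
validate_per_page(25, context: :api)  # => []
```

<p>Neither rule belongs on the model itself; the model only cares that the value is persistable.</p>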

<h4 id="domainprocesses">Domain Processes</h4>

<p>The Code Climate article <a href="http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/">7 Patterns to Refactor Fat ActiveRecord Models</a>, is an excellent place to start learning how to extract business logic from ActiveRecord models. It helps you build up a layer of PORO objects, implementing all of your business logic and sitting between the UI/API/CLI and your persisted data.</p>

<p>In big, complex systems, it is often difficult to achieve this consistently, and interacting with your business layer to perform actions can require a lot of specific domain knowledge. While individual classes may have intention-revealing names and be beautifully architected, the entire process cannot be effectively communicated this way. This happens when your CTO drops into the CLI to credit a user’s balance and forgets to add an activity entry or kick off a related background worker.</p>

<p>These separate, decoupled but related pieces make up a high-level <em>function</em> performed on your system, coordinating many processes into a single, repeatable unit of action. You don’t want to tie balance updates together with creating activity feed entries, but there is a need to orchestrate the two together in the act of crediting an account.</p>

<h4 id="solvingtheseproblems">Solving These Problems</h4>

<p>The four problems we’ve covered can each be solved on their own:</p>

<ul>
<li><p>Form Objects, provided by gems like <a href="https://github.com/makandra/active_type">ActiveType</a>, <a href="https://github.com/apotonick/reform">Reform</a>, and <a href="https://github.com/solnic/virtus">Virtus</a>, can decouple how users interact with your system from how your data is stored, and provide a home for contextual validation.</p></li>
<li><p>The <a href="http://dry-rb.org/gems/dry-validation/">dry-validation</a> gem can be used to validate input outside of an ActiveRecord Model instance and build trust in the persisted data you pass around.</p></li>
<li><p>A top-level layer of service objects can orchestrate the processes and encapsulate the business rules, delegating to more specific classes… given the discipline to ensure you use it.</p></li>
</ul>

<p>The benefit of an Operation is that it wraps <em>all</em> of these pieces, validation, UI decoupling and a business layer, together into a single, consistent entry point for performing an action on the system; whether that be via a form on the UI, a JSON API call, a process kicked off by cron, or a meddlesome CTO in the Rails console.</p>

<h4 id="whatisanoperation">What is an Operation?</h4>

<p>The Trailblazer framework <a href="http://trailblazer.to/gems/operation/">defines an operation</a> very well:</p>

<blockquote>
  <p>… an operation embraces and orchestrates all business logic between the controller dispatch and the persistence layer. This ranges from tasks as finding or creating a model, validating incoming data using a form object to persisting application state using model(s) and dispatching post-processing callbacks or even nested operations.</p>
  
  <p>Note that operation is not a monolithic god object, but a composition of many stakeholders.</p>
</blockquote>

<p>The last bit is key: an Operation itself does nothing. Everything is delegated but it provides a common, <em>functional</em>-style interface to your application and can orchestrate many separate actions into a cohesive business process.</p>

<p>In essence, you create a sort of DSL for interacting with your business layer that describes <em>what</em> you want to do, hiding the <em>how</em>.</p>

<h4 id="usingoperations">Using Operations</h4>

<p>A lot of this discussion has been conceptual and abstract. What does an operation look like in implementation and how does it solve all of these problems?</p>

<p>Treat an operation like a function: you pass a hash of simple input to a class method which runs the operation and returns an immutable instance of itself. This resulting instance can be used to render the UI or serialize a JSON response based on the result.</p>

<p>What the operation does is completely up to you, but it is likely you will have it orchestrate the authorization of the request, the validation of the input, the execution of the tasks, and the return of the results.</p>

<p>For example, we have a form that contains some extra details about the user when they sign up for our website. The operation might look like this:</p>

<p><img src="http://bryanrite.com/content/images/2016/05/operation-diagram.svg" alt="Operation Diagram"></p>

<p>Our form's input spans several models. Our input doesn't conform to any nested hashing based on how we are storing the data; even though a <code>User</code> may have a <code>has_many</code> relationship with <code>SocialAccounts</code>, our sign up form or JSON API doesn’t care.</p>

<p>The Operation defines a validation <em>contract</em>: what values it will accept and the rules that validate them. A password confirmation and a twitter handle are only required when a <code>User</code> with the <code>customer</code> role is created via our registration form. The <code>User</code> model doesn’t need to know about these rules and adding a form for creating <code>admin</code> users with different validations won’t be affected.</p>

<p>Once authorized and valid, the Operation executes the business logic. This could be directly creating ActiveRecord models, delegating to service objects, external API calls, creating background jobs… whatever is required. The operation just delegates the actual work to other objects but orchestrates the process as a whole.</p>
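<p>As a minimal, dependency-free sketch (the class, attributes, and result shape here are hypothetical; libraries like Trailblazer's operation gem provide a real implementation):</p>

```ruby
# A "function-style" operation: simple hash in, frozen-shape result out.
class RegisterUser
  Result = Struct.new(:errors, :user) do
    def success?
      errors.empty?
    end
  end

  def self.call(params)
    errors = []
    errors << "email is required" if params[:email].to_s.empty?
    return Result.new(errors, nil) unless errors.empty?

    # Delegate the real work here: persist models, enqueue jobs,
    # call services... this hash stands in for User.create!(...)
    user = { email: params[:email] }
    Result.new([], user)
  end
end

RegisterUser.call(email: "jane@example.com").success?  # => true
RegisterUser.call({}).errors                           # => ["email is required"]
```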

<p>Now, when you need to implement a CSV bulk customer import, maybe you don’t want to send birthday emails, or the password confirmation isn’t necessary; you can create another Operation that captures the specific process of the CSV import and its specific validation rules without duplicating any of the underlying functionality.</p>

<h4 id="whatbenefitsdoweget">What Benefits Do We Get</h4>

<p>By organizing our code this way, there are quite a few benefits.</p>

<ol>
<li><p>Operations are not tied to models in any way; they are functional business processes. Each operation can validate, authorize, and process itself in its own context and return an immutable and stateless result.</p></li>
<li><p>Underlying data and models can be refactored without changing the operation’s public interface… or we can add more parameters, processes, and models without affecting what it already does, or other similar operations.</p></li>
<li><p>Controllers become ultra thin. They care nothing about validation, authorization, loading, creating, or updating. They know nothing about parameters (bye bye <code>strong_parameters</code>). They simply call the operation and render or redirect based on the result.</p></li>
<li><p>You end up with a folder full of performable actions. Onboarding new developers, understanding what your application can do and interacting with it becomes more clear.</p></li>
<li><p>Operations are the perfect target for acceptance/integration tests. They outline an entire, repeatable process, tying many underlying pieces together, and are the only way a user interacts with your entire system.</p></li>
<li><p>You can get rid of fixtures and/or <code>factory_girl</code>. Setting up your tests by creating (and then maintaining) the underlying data models is not how data is created in production. Your test can use the list of operations to build up a scenario exactly the same way it would occur in production… No maintenance required!</p></li>
<li><p>Operations are stateless with simple input, making it easy to run them inline or as asynchronous jobs.</p></li>
</ol>

<h4 id="operationimplementations">Operation Implementations</h4>

<p>Nick’s Trailblazer framework encapsulates a lot of this, including the ability to use <code>dry-validation</code> with his <code>reform</code> form object library, along with some other great ideas. You would be well served to take a look at it and understand the reasons behind the choices made in the framework.</p>

<p>He has recently extracted <a href="https://github.com/trailblazer/trailblazer-operation">trailblazer-operations</a> into its own separate gem, so you can grab it without needing to dive into the entire Trailblazer Framework.</p>

<p>I have authored a small, low-ceremony gem <a href="https://github.com/bryanrite/operational">Operational</a> that tackles the same problem but relies on Rails conventions rather than being framework agnostic, resulting in a powerful but simple library with much less code and no dependencies. It might be what you're looking for.</p>

<p>Do you know of any other implementations of the concept of operations? Comment below!</p>

<h4 id="tldr">TL;DR</h4>

<p>I don’t necessarily recommend you start your next greenfield project with all sorts of extra layers and throw away some of the quick, boilerplate free benefits Rails gives us. Applications made “the Rails Way” can and do work, but as your business layer gets more complex, Operations may provide a clean way to grow and organize your project.</p>

<ul>
<li>Separate interaction with your application and how data is stored. Interaction should focus on business operations, not how data is persisted.</li>
<li>Validate input without creating invalid objects. Rely on gems like <code>dry-validation</code> to eliminate inconsistency so you can trust your ActiveRecord models.</li>
<li>Almost all validation is contextual and business domain related. Remove it from your ActiveRecord models and validate it in the context you receive it.</li>
<li>Simplify and standardize your application’s interface by wrapping up business actions as functional Operations, creating a pseudo API/DSL of your business domain.</li>
<li>Have many small, decoupled objects to delegate to, but actions, whether from an API call, Rails console, web form, background worker, or a cron task, all start as an Operation.</li>
<li>Take a look at <a href="https://github.com/bryanrite/operational">Operational</a> or <a href="http://trailblazer.to/">Trailblazer</a> to help organize a complex business domain into a consistent and functional interface of immutable, stateless, and repeatable Operations.</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Handling Phone Numbers: Best Practices]]></title><description><![CDATA[<p><em>Originally posted on the <a href="https://mojolingo.com/blog/2015/best-practices-handling-phone-numbers/">Mojo Lingo blog</a>.</em></p>

<p>When building real-time and telephony communication applications, you will inevitably need to store phone numbers. Whether it's input you get from Freeswitch, Asterisk, or via an API like Tropo or Twilio, phone numbers can be tricky to handle, parse, verify, store, and display</p>]]></description><link>http://bryanrite.com/handling-phone-numbers-best-practices-for-developers/</link><guid isPermaLink="false">816bd640-1325-4cfd-b021-ff0bdacb488b</guid><category><![CDATA[Best Practices]]></category><category><![CDATA[Telephony]]></category><category><![CDATA[WebRTC]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Mon, 31 Aug 2015 12:38:11 GMT</pubDate><content:encoded><![CDATA[<p><em>Originally posted on the <a href="https://mojolingo.com/blog/2015/best-practices-handling-phone-numbers/">Mojo Lingo blog</a>.</em></p>

<p>When building real-time and telephony communication applications, you will inevitably need to store phone numbers. Whether it's input you get from Freeswitch, Asterisk, or via an API like Tropo or Twilio, phone numbers can be tricky to handle, parse, verify, store, and display in your application.</p>

<iframe width="100%" height="100px" src="https://everlit.audio/embeds/artl_ozK3G5cwKBV?st=ads" title="Everlit Audio Player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<h4 id="whyarephonenumberssohard">Why Are Phone Numbers So Hard?</h4>

<p>Phone numbers are very difficult to verify as their format can be dramatically different for various countries. Length, allowed starting numbers, reserved blocks, short codes, and more make it very difficult to parse and verify that a number is valid. When you receive a phone number input, does the number include the country code, an international dialing prefix, a national dialing prefix, an extension number, a special code like <code>411</code> or <code>911</code>, or a special carrier command like <code>*69</code> or <code>1157</code>?</p>

<p>Just displaying phone numbers from around the world can be tricky as the groupings of digits are different, such as in the US: <code>(213) 555-1234</code> or the UK: <code>(0)20 1234 5678</code>. In addition, some countries have multiple formats! In Spain you can write a number like <code>123 456 789</code> or <code>123 45 67 89</code>.</p>

<p>Even the "country code" is misleading, as 20 countries in and around North America share the same country code (thanks to the <a href="https://en.wikipedia.org/wiki/North_American_Numbering_Plan">North American Numbering Plan</a>).</p>

<p>How are we supposed to handle, verify, query, and work with this diverse pool of numbers that conforms to very few rules?</p>

<h4 id="thee164standardformat">The E.164 Standard Format</h4>

<p>You've probably run into a similar issue when working with dates and timezones in your career as a developer. Date input can be just as varied as the phone number system, and timezones add a unique wrinkle when outputting and comparing dates. This has (arguably) been solved with a standardized format, <a href="https://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a>, which unambiguously organizes dates and times with all the necessary localization information in an easily human-readable and machine-parseable format.</p>

<p><a href="https://en.wikipedia.org/wiki/E.164">E.164</a> does that for phone numbers. It defines a simple format for unambiguously storing phone numbers in an easily readable string. The string starts with a <code>+</code> sign, followed by the country code and the "subscriber" number, which is the phone number without any context prefixes such as local dialing codes, international dialing codes, or formatting.</p>

<p>Numbers stored as E.164 can easily be parsed, formatted and displayed in the appropriate context... since the context of a phone number can greatly affect its format. So, with a UK number stored as <code>+442012345678</code> we can easily display it in the appropriate format for the various contexts:</p>

<ul>
<li><code>+44 20 1234 5678</code> - UK International format.</li>
<li><code>(0)20 1234 5678</code> - UK National format.</li>
<li><code>011 44 20 1234 5678</code> - Dialing from US to UK.</li>
<li><code>020 1234 5678</code> - Dialing locally within the UK.</li>
</ul>

<p>E.164 stores the important parts of the phone number that never change in an easily parseable string that allows us to then format the number depending on the context in which we are displaying it.</p>
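<p>To make the transformation concrete, here is a deliberately naive Ruby sketch that normalizes a UK national-format number to E.164 (it assumes we already know the country code is 44 and the trunk prefix is "0"; real applications should use a proper parsing library rather than hand-rolled rules like these):</p>

```ruby
# Naive E.164 normalization for a UK national-format number only.
def to_e164_uk(raw)
  digits = raw.gsub(/\D/, "")    # strip spaces, parens, and dashes
  digits = digits.sub(/\A0/, "") # drop the national trunk prefix "0"
  "+44#{digits}"
end

to_e164_uk("(0)20 1234 5678") # => "+442012345678"
to_e164_uk("020 1234 5678")   # => "+442012345678"
```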

<p>Now that you want to store your numbers as E.164, how do you parse and format them in your application?</p>

<h4 id="googleslibphonenumber">Google's <code>libphonenumber</code></h4>

<p>There are lots of libraries out there that parse and format numbers into E.164, but I think that <a href="https://github.com/googlei18n/libphonenumber">Google's open source <code>libphonenumber</code></a> is the best. Google's experience with international number support on their Android platform exposes them to a more complete and accurate list of phone numbers around the globe.</p>

<p>With <code>libphonenumber</code> you can parse, verify, and format phone number inputs quite easily, do as-you-type formatting, and even glean extra information about the number, like whether it is a mobile or landline number, or what state or province it is from.</p>

<p><code>libphonenumber</code> in its basic form consists of a set of rules and regular expressions in an XML file for breaking down and parsing a number. Google provides Java, Javascript, and C++ versions of the library, but people have <a href="https://github.com/googlei18n/libphonenumber#known-ports">ported it to other languages like Ruby, PHP, and Python</a>.</p>

<p>In addition, it can provide offline reverse geocoding and map numbers to specific carriers if the data is available.</p>

<h4 id="otherspecialcases">Other Special Cases</h4>

<p>E.164 describes a format for internationally routeable numbers: numbers that are reachable from many countries. Some special numbers do not meet this criterion, like nationally specific numbers such as <code>911</code>. Special numbers, <em>especially</em> emergency numbers like <code>911</code> or <code>112</code>, require specific and often regulated handling, specific to your country. If you have to deal with these numbers, ensure you are meeting any required regulations and handle them as special cases. They are not formattable as E.164 numbers.</p>

<p>Extensions are another common piece of data when storing and collecting phone numbers. Think of extensions as extra information to send once you are connected. Extensions are not dialed when connecting to a phone number but are sent as extra instructions to the end system after you've connected to further direct your call... kind of like telephone NAT. They are not part of an E.164 number but are easily stored in a separate field and appended to any format for creating dialing strings or in the view.</p>
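<p>In practice this means storing the extension in its own field and appending it only when building a dial string or a display value (the <code>;ext=</code> form below follows the tel URI convention from RFC 3966; the helper name is made up):</p>

```ruby
# Append an extension to an E.164 number only when producing a dial string.
def dial_string(e164, extension = nil)
  extension ? "#{e164};ext=#{extension}" : e164
end

dial_string("+442012345678", "204") # => "+442012345678;ext=204"
dial_string("+442012345678")        # => "+442012345678"
```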

<h4 id="tldr">TL;DR</h4>

<ul>
<li>Parse and store all your phone numbers as E.164. It is easy to compare and unambiguous to parse.</li>
<li>Use a library like <code>libphonenumber</code> to parse and format a phone number for output.</li>
<li>Ensure you handle emergency numbers like <code>911</code> and <code>112</code> as special cases and make sure you are meeting any of your country's regulations.</li>
<li>Extensions are not a part of a phone number but something to send after you've connected. They should be input and stored separately.</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Heroku, Puma, Redis, Sidekiq and Connection Limits]]></title><description><![CDATA[<p><em>Updated 2015-08-07: Sidekiq client processes can use connection pooling rather than requiring one connection per thread.</em></p>

<p><em>Updated 2015-08-08: Added an additional considerations section for other things to look out for.</em></p>

<p>When deploying a Rails app with Sidekiq to Heroku, it can be confusing to figure out how many of your</p>]]></description><link>http://bryanrite.com/heroku-puma-redis-sidekiq-and-connection-limits/</link><guid isPermaLink="false">49bea9dd-3735-4b10-a62e-c95d5bbadc4c</guid><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[Heroku]]></category><category><![CDATA[Puma]]></category><category><![CDATA[Sidekiq]]></category><category><![CDATA[Redis]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Fri, 24 Jul 2015 17:59:17 GMT</pubDate><media:content url="https://unsplash.it/1400/600?image=304" medium="image"/><content:encoded><![CDATA[<img src="https://unsplash.it/1400/600?image=304" alt="Heroku, Puma, Redis, Sidekiq and Connection Limits"><p><em>Updated 2015-08-07: Sidekiq client processes can use connection pooling rather than requiring one connection per thread.</em></p>

<p><em>Updated 2015-08-08: Added an additional considerations section for other things to look out for.</em></p>

<p>When deploying a Rails app with Sidekiq to Heroku, it can be confusing to figure out how many of your limited connections to Redis and the database you have available when setting your connection pool sizes in your configs.</p>

<p>The free tiers of the various Redis providers on Heroku, which are generally large enough to handle a job queue at first, have limits of 10-30 connections before you start getting connection refused errors. How do we calculate our optimal connection pool size with web dynos, clustered Puma threads, and job/worker dynos?</p>

<h4 id="theclientmath">The Client Math</h4>

<del>Sidekiq clients (i.e. your web processes) only need 1 connection per process. So you can set your client block to something like:</del>

<del>  

<pre><code class="language-ruby">Sidekiq.configure_client do |config|  
  config.redis = { size: 1, url: ENV["REDIS_URL"], namespace: "your-app" }
end  
</code></pre>


</del>

<p><strong>Updated:</strong></p>

<p>Sidekiq uses a connection pool by default per process, so Puma's threads can share a limited number of connections for each web dyno. </p>

<p>It's unlikely your app will need a connection per thread, since the client's only job is to push to Redis, which is very fast. We can then share a <em>reasonable</em> amount of connections between all the Puma threads to minimize blocking and idle connections.</p>

<p>What is <em>reasonable?</em> Depends on your application, but as a rule of thumb, try half the number of Puma threads per worker. <a href="https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#threads">Heroku recommends</a> a max thread count of <code>5</code> to work on their single dyno, so a client size of <code>2</code> or <code>3</code> will suffice per web dyno process.</p>

<pre><code class="language-ruby">Sidekiq.configure_client do |config|  
  config.redis = { size: 3, url: ENV["REDIS_URL"], namespace: "your-app" }
end  
</code></pre>

<p>So, how many connections will we be using:</p>

<pre><code>Puma Workers * (Puma Max Threads / 2) * Heroku Web Dynos  
</code></pre>

<p>Again, <a href="https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#workers">Heroku recommends 2 workers</a> per web dyno. If we have 2 web dynos, we'll have something like:</p>

<p><code>2 workers * 3 shared connections * 2 web dynos</code> = <code>12</code> connections to Redis.</p>

<h4 id="theservermath">The Server Math</h4>

<p>The Heroku Redis Hobby-dev tier has a max connection limit of <code>20</code>, so we have <code>8</code> connections left for our server connection pool. The number of connections the server uses is:</p>

<pre><code>Heroku Job Dynos * Sidekiq Concurrency Count + 2 (reserved for internal Sidekiq stuff)  
</code></pre>

<p>We already know this has to come out to <code>8</code>, the number of connections we have left, so if we only have one worker dyno:</p>

<pre><code>1 * x + 2 = 8  
x = 6  
</code></pre>

<p>We can set the concurrency option in <code>sidekiq.yml</code> with a value of <code>6</code> and you don't have to worry about setting the <code>:size</code> parameter in the <code>Sidekiq.configure_server</code> block since Sidekiq will default to <code>concurrency + 2</code>.</p>

<p>Now you're taking advantage of all your allowed connections.</p>
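<p>The whole calculation can be sketched in a few lines of Ruby (the inputs are the Heroku-recommended defaults used in this post; plug in your own dyno formation):</p>

```ruby
puma_workers     = 2  # Puma workers per web dyno
puma_max_threads = 5  # Puma threads per worker
web_dynos        = 2
job_dynos        = 1
redis_limit      = 20 # Heroku Redis hobby-dev connection limit

client_pool  = (puma_max_threads / 2.0).ceil                # shared pool per process
client_total = puma_workers * client_pool * web_dynos       # => 12
concurrency  = (redis_limit - client_total - 2) / job_dynos # => 6
```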

<h4 id="otherconsiderations">Other Considerations</h4>

<p>Other processes such as the Heroku Scheduler or Rails Console that push to or pull from jobs on the Sidekiq queue will act as another client and initialize another set of connections. Even if the connections are only temporary, ensure you consider these in your client size total and treat them as an additional client "dyno".</p>

<p>Additional add-ons can also use connections of their own. For example, the RedisMonitor add-on uses 1 or 2 connections to monitor the Redis server. Be sure to take these into account as well.</p>

<h4 id="tldr">TL;DR</h4>

<pre><code>Client Size = Puma Workers * (Puma Threads / 2) * Heroku Web Dynos  
Server Size = (Redis Connection Limit - Client Size - 2) / Heroku Job Dynos  
</code></pre>

<p><small><em>Thanks to Mike Perham and Alex Ostleitner for some pointers.</em></small></p>]]></content:encoded></item><item><title><![CDATA[Encrypted Braintree Input Type with SimpleForm]]></title><description><![CDATA[Automatically add data-encrypted-name to your sensitive Braintree fields with this custom SimpleForm Input.]]></description><link>http://bryanrite.com/simple-form-braintree-input-type/</link><guid isPermaLink="false">e5cbf547-33cb-4950-af7a-79e6a74e29f9</guid><category><![CDATA[User Interface]]></category><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[Braintree]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Mon, 13 Jul 2015 14:46:44 GMT</pubDate><media:content url="https://unsplash.it/1400/600?image=620&amp;gravity=south" medium="image"/><content:encoded><![CDATA[<img src="https://unsplash.it/1400/600?image=620&gravity=south" alt="Encrypted Braintree Input Type with SimpleForm"><p>When using Braintree and their Javascript integration library, you have to append a <code>data-encrypted-name</code> to the input field to allow the Braintree JS client to select, encrypt, and replace the value.</p>

<p>When working with Rails, the input name generally includes the form name and the attribute name such as: <code>add_credit_card[credit_card_number]</code>... so you might be doing:</p>

<pre><code class="language-ruby">f.input :credit_card_number, as: :string, data: { encrypted_name: 'add_credit_card[credit_card_number]' }  
</code></pre>

<p>but if you ever change the attribute name or the form name, or move the field somewhere else, these values change and you have to remember to update the <code>data-encrypted-name</code> manually.</p>

<p>If you're using SimpleForm 2+, I built a custom input type that will automatically append the <code>data-encrypted-name</code> for you... no more manually adding or editing it.</p>

<p>Simply add:</p>

<pre><code class="language-ruby"># app/inputs/braintree_encrypted_input.rb

class BraintreeEncryptedInput &lt; SimpleForm::Inputs::StringInput  
  def input(wrapper_options = nil)
    input_html_options[:'data-encrypted-name'] = "#{object_name}[#{attribute_name}]"
    super(wrapper_options)
  end
end  
</code></pre>

<p>to your project and use it in your forms:</p>

<pre><code class="language-ruby">f.input :credit_card_number, as: :braintree_encrypted  
</code></pre>]]></content:encoded></item><item><title><![CDATA[Ruby on Rails WebDAV Tutorial]]></title><description><![CDATA[<p>Ever wanted a Devise authenticated, per-user chroot'd, WebDAV implementation for your Ruby on Rails application? Well I created one for a client and wrote a tutorial about it on Github!  Check it out:</p>

<p><a href="https://github.com/chrisroberts/dav4rack/wiki/Advanced-Rails-3-Tutorial---Custom-Resource,-Devise,-and-User-Specific-Routing">Rails 3 WebDAV Tutorial with Custom Resources, Authentication with Devise, and User Specific Routing</a></p>

<p>The great gem</p>]]></description><link>http://bryanrite.com/rails-webdav-tutorial/</link><guid isPermaLink="false">aa9e0ad7-4741-4875-a00f-8df2738d3623</guid><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Thu, 26 Jan 2012 23:29:00 GMT</pubDate><content:encoded><![CDATA[<p>Ever wanted a Devise authenticated, per-user chroot'd, WebDAV implementation for your Ruby on Rails application? Well I created one for a client and wrote a tutorial about it on Github!  Check it out:</p>

<p><a href="https://github.com/chrisroberts/dav4rack/wiki/Advanced-Rails-3-Tutorial---Custom-Resource,-Devise,-and-User-Specific-Routing">Rails 3 WebDAV Tutorial with Custom Resources, Authentication with Devise, and User Specific Routing</a></p>

<p>The great gem <a href="https://github.com/chrisroberts/dav4rack">DAV4Rack</a> and its creator Chris Roberts deserve a huge shout-out.</p>

<p><em>Note: The tutorial is part of a Wiki and is subject to change.</em></p>

<p><strong>Update:</strong> I built a sample app for this and it is available on Github: <a href="https://github.com/bryanrite/dav4rack-example-devise-subdirectories">github.com/bryanrite/dav4rack-example-devise-subdirectories</a></p>]]></content:encoded></item><item><title><![CDATA[Ruby on Rails CookieStore Security Concerns]]></title><description><![CDATA[<p>The CookieStore session storage in Ruby on Rails is not new; in fact, it has been the default session store since Rails 2.0. Since then, there have been countless blog posts and forum threads discussing various security concerns vs a server-sided store (ActiveRecordStore, Memcache, SqlStore, etc.). They all seem</p>]]></description><link>http://bryanrite.com/ruby-on-rails-cookiestore-security-concerns-lifetime-pass/</link><guid isPermaLink="false">ba7f1a66-6f55-4b26-94c2-4330e53e2be9</guid><category><![CDATA[Ruby on Rails]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Fri, 09 Sep 2011 17:39:00 GMT</pubDate><content:encoded><![CDATA[<p>The CookieStore session storage in Ruby on Rails is not new; in fact, it has been the default session store since Rails 2.0. Since then, there have been countless blog posts and forum threads discussing various security concerns vs a server-sided store (ActiveRecordStore, Memcache, SqlStore, etc.). They all seem to miss an important point: by default, a stolen cookie gives the thief a lifetime pass to a user account!</p>


<p>I will explain how this happens and some steps you can implement to mitigate it.</p>

<p>Most discussion seems to be focused on the obvious security difference: that data in the cookie is stored in plain text (well, it's stored in Base64, but that's trivially decodable).  Storing sensitive information or stateful data in a session, be it server side or cookie store, is bad practice; for most of us, who store only simple reference information like the logged-in user's id, this is not <em>usually</em> much of a concern.</p>
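<p>To see just how transparent that encoding is, here is a sketch that builds a CookieStore-style value and reads it back; the secret and session contents are invented for illustration:</p>

```ruby
require "base64"
require "openssl"

# Build a CookieStore-style value ourselves: a Marshal'd session hash,
# Base64-encoded, then "--" and an HMAC of the payload. The secret and
# session contents here are made up for illustration.
secret  = "some-long-application-secret"
session = { "user_id" => 42 }

payload = Base64.strict_encode64(Marshal.dump(session))
cookie  = "#{payload}--#{OpenSSL::HMAC.hexdigest("SHA1", secret, payload)}"

# Reading it back needs no secret at all; the HMAC only detects tampering:
data, _hmac = cookie.split("--")
stolen = Marshal.load(Base64.decode64(data))
puts stolen["user_id"]  # => 42
```

<p>The HMAC at the end stops tampering, but nothing stops reading.</p>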

<p>Let's use an example: our cookie stores the logged-in user's ID, or nothing if the user isn't logged in. This is a common scheme used in many of the Rails screencasts and guide books.</p>

<p>Instead of storing a <code>session_id</code> which identifies a server-side storage record holding the user's ID, we store the user ID in the cookie. <em>Generally</em> this user ID isn't sensitive; it's probably in the user's profile URL or within the code somewhere.</p>

<p>If you're following the above rule and only storing simple reference information, your cookie isn't much different than a server-side store.  The HMAC digest within the Rails CookieStore cookie prevents someone from tampering with and changing the cookie, so we cannot change our cookie to a different user ID and be logged in as that user, just as it would be difficult to guess a different <code>session_id</code> and access someone else's session.</p>

<p>Since both implementations rely on a cookie being passed anyway, they share the same security concerns and are equally susceptible to replay and fixation attacks.</p>

<h4 id="oksowhatisourconcernthen">Ok, so what is our concern then?</h4>

<p>There is one major difference that never seems to be brought up.  If (and when!) a CookieStore cookie is stolen:</p>

<p><em>By default, a CookieStore session will never become invalid.</em></p>

<p>By this, I mean if I steal an authenticated cookie, I can use it to access the site as that user. This is a common attack called Session Hijacking or Sidejacking, but with server-side storage it can be mitigated.</p>

<p>Generally, with a server side store, you delete the <code>session_id</code> and accompanying data when a user logs out or times out.  Any stolen <code>session_ids</code> are no longer valid because that <code>session_id</code> no longer exists.  If the valid user never logs out and the attacker keeps sending requests, they can keep the session alive, but this can still be detected and stopped.</p>

<p>With the cookie based store, even if the valid user logs out and an expiry date is put on the cookie, an attacker can change the expiry date and replay the cookie at any time.  It will always be valid as there is no <code>session_id</code> to compare against and the cookie expiry is not guarded by the HMAC digest.</p>

<p>The Rails Guides suggest using <code>reset_session</code> to stop hijacking, but this does not help with CookieStore: there is no session identifier to reset!</p>

<h4 id="really">Really?</h4>

<p>Give it a try yourself!  On any of your CookieStore-based Rails apps:</p>

<ul>
<li>Load up Firefox and install the <a href="https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/">Live HTTP Headers</a> or <a href="https://addons.mozilla.org/en-US/firefox/addon/tamper-data/">Tamper Data</a> plugin; I'll use Live HTTP Headers.</li>
<li>Log into your app.</li>
<li>Start Live HTTP Headers</li>
<li>Go to a page that shows you if you are logged in or not.</li>
<li>On the Live HTTP Headers modal, select the main request header; it's usually the topmost one.  Pretend you are an attacker who captured this off a wireless network.</li>
<li>Log out of your web app.</li>
<li>Back on the Live HTTP Headers modal, you are now the attacker logging in with the stolen cookie: replay the main request header.</li>
<li>You will now be logged into your app.  No matter what you do, the attacker can save that stolen cookie and replay it any time from anywhere and log back in.</li>
</ul>

<h4 id="howcanwestopit">How can we stop it?</h4>

<p>Well, stopping it is quite easy, and I'll explain a couple of ways how, but the real reason I bring this up is that it isn't obvious to people using the default session store that there is a huge concern here.  Everyone talks about never storing sensitive information in the cookie, but getting an authenticated cookie, by default, gives you a <em>life-time pass</em> to that user's account. That seems much worse! Without a security measure, your application has a huge hole, and none of the Rails tutorials, documentation, or screencasts seem to mention this.</p>

<p>No matter what authentication library you use, be it Devise, Authlogic, or one you rolled yourself, they are all susceptible because they all use whichever session store you choose.</p>

<p>The best and easiest solution is simply to use SSL.  Not just on your login forms and actions, but your <em>entire site</em>, or at least any pages where you have sessions turned on.  With SSL on, an attacker will not be able to capture and replay your cookies, and the entire attack vector is shut down. Rails 3.1 has a handy <code>force_ssl</code> switch you can use, and you can use something like:</p>

<pre><code class="language-ruby">:secure =&gt; Rails.env.production?
</code></pre>

<p>in your <code>config.session_store</code> declaration to ensure the cookie is only served over SSL.</p>
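<p>Put together, a Rails 3-era declaration might look like this (the app and key names are placeholders, not from any real project):</p>

```ruby
# config/initializers/session_store.rb -- app and key names are made up.
# The :secure flag tells the browser to only send the cookie over SSL.
MyApp::Application.config.session_store :cookie_store,
  :key    => '_myapp_session',
  :secure => Rails.env.production?
```
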

<p>If you don't or can't use SSL, try implementing a timeout and a <a href="http://en.wikipedia.org/wiki/Cryptographic_nonce">nonce</a> within the HMAC protected portion of the cookie.  </p>

<p>For example, managing a session timeout yourself by updating the expiry date on every request unless the expiry has passed creates a non-editable timeout on the session and will invalidate it after a specific time.  Of course the attacker can still touch your app to keep the timeout alive indefinitely, but it helps.</p>

<p>In addition to a timeout, adding a nonce, even something simple, can help invalidate existing cookies.  Storing a hash based on the user's last login and/or logout time can invalidate stolen cookies every time the valid user logs in/out.  This, coupled with the timeout can mitigate hijacking and puts it on par with server-based session management schemes.</p>
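<p>As a sketch of that nonce idea (the class and attribute names below are invented, not taken from any particular auth library):</p>

```ruby
require "digest"

# Invented names for illustration; wire this into your own auth layer.
class SessionNonce
  # Generate at sign-in time and store inside the HMAC-protected session.
  def self.generate(user)
    Digest::SHA1.hexdigest("#{user.id}:#{user.last_login_at.to_i}")
  end

  # Check on every request; a cookie captured before the user's most
  # recent sign-in carries a stale nonce and fails here.
  def self.valid?(session, user)
    session[:nonce] == generate(user)
  end
end

# Stand-in for your real user model:
User = Struct.new(:id, :last_login_at)
user = User.new(7, Time.now)
session = { :nonce => SessionNonce.generate(user) }

SessionNonce.valid?(session, user)  # => true
user.last_login_at = Time.now + 60  # the user signs in again...
SessionNonce.valid?(session, user)  # => false: old cookies invalidated
```

<p>Every fresh sign-in changes the nonce, so a cookie stolen before that sign-in is rejected even though its HMAC is still valid.</p>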

<p>Regardless of the session store mechanism you use, they're all susceptible to attack unless you're using SSL.  Unfortunately, by default, Rails' CookieStore gives you a no-fuss lifetime pass instead of a day-pass.</p>]]></content:encoded></item><item><title><![CDATA[Internationalization Strategies in ASP.Net]]></title><description><![CDATA[<p>Internationalization can be a tough area to do properly, in a scalable and manageable way. Most languages have their own system for handling different languages and cultures: Ruby uses its i18n library and YML files, PHP uses GNU gettext and PO files, while ASP.Net uses XML files presented by</p>]]></description><link>http://bryanrite.com/internationalization-strategies-in-asp-net-and-lessons-for-other-languages/</link><guid isPermaLink="false">7dccd826-ff5c-41c8-b107-95de4f0653e1</guid><category><![CDATA[C#]]></category><category><![CDATA[ASP.Net]]></category><category><![CDATA[User Interface]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Fri, 15 Apr 2011 11:12:00 GMT</pubDate><content:encoded><![CDATA[<p>Internationalization can be a tough area to do properly, in a scalable and manageable way. Most languages have their own system for handling different languages and cultures: Ruby uses its i18n library and YML files, PHP uses GNU gettext and PO files, while ASP.Net uses XML files presented by the IDE in Resource files.</p>

<p>I will explain some useful tips we learned while implementing Internationalization in one of my latest projects.</p>

<h4 id="understandingglobalizationandlocalization">Understanding Globalization and Localization</h4>

<p>Internationalization can be roughly broken down into two sections: Globalization and Localization, and understanding the difference between these is important.</p>

<p>When we talk about internationalization, people mainly think about multiple languages. For example, we want to offer our project in English and Chinese. What we’re really saying is we want to localize our project with two different languages. Essentially replacing the text depending on the user’s preference.</p>

<p>Now say that project is selling a product… are we going to change the currency symbols to match the language? Are all your prices going to be converted to Yuan for your Chinese-speaking users? In some cases, yes; in a lot of cases, no, we still want to display currency and date formats in a single culture. This is the difference between Localization (language) and Globalization (culture).</p>

<p>Take for instance the Apple Store.  The Apple Store is Globalized for several different cultures.  The Chinese store sells its products to people in China, so using the Chinese language and culture (zh-CN) is the right thing to do.  Now say Apple offered a version of its USA store in Mexican-Spanish.  It would likely not display currency in Pesos; it is still the USA store, Apple just wants to make it more friendly to native Spanish speakers.  In this case we would use the US culture (en-US) and the Mexican-Spanish language (es-MX): the text would be displayed in Spanish, but the currency symbols would continue to be dollars.</p>

<p>ASP.Net allows us to specify these settings separately:</p>

<pre><code class="language-cs">Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-US");  
Thread.CurrentThread.CurrentUICulture = CultureInfo.CreateSpecificCulture("es-MX");  
</code></pre>

<h4 id="whatssoimportantabouttheculture">What’s so important about the Culture</h4>

<p>In my experience, cultural settings are most important in two areas: currency and date format.</p>

<p>Currency can have different currency symbols ($, €, ¥), different thousands separators, and different decimal separators.  Dates can be displayed in different formats (m/d/y, d/m/y).</p>

<p>Since these can differ from our localization settings, we need a way to dynamically show them based on the culture (not the language).  ASP.Net has the handy <code>.ToString()</code> method which can take standard identifiers like “C” or “D”.  ASP.Net will automatically output the values based on your current culture, but when you want to customize the output, check out the following properties:</p>

<pre><code class="language-cs">System.Globalization.CultureInfo.CurrentCulture.NumberFormat  
System.Globalization.CultureInfo.CurrentCulture.DateTimeFormat  
</code></pre>

<p>They will have everything you need, from the current culture's date format strings to the currency symbol.  Below are a couple of extension methods I commonly use to display data.  As you can see, they take the format from the culture and the language from the UI culture.</p>

<pre><code class="language-cs">public static string ToLongDateNoWeekDayString(this DateTime date, bool abbrieviate)  
{
    string dateFormat = System.Text.RegularExpressions.Regex.Replace(System.Globalization.CultureInfo.CurrentCulture.DateTimeFormat.LongDatePattern, @",?\s*dddd?,?", "");

    if (abbrieviate)
        return date.ToString(System.Text.RegularExpressions.Regex.Replace(dateFormat, @"MMMM", "MMM"), System.Globalization.CultureInfo.CurrentUICulture);
    else
        return date.ToString(dateFormat, System.Globalization.CultureInfo.CurrentUICulture);
}

public static string ToLongDateTimeNoWeekDayString(this DateTime date, bool removeSeconds)  
{
    string pattern = (removeSeconds) ? @"(,?\s*dddd?,?)|(:ss)" : @",?\s*dddd?,?";
    return date.ToString(System.Text.RegularExpressions.Regex.Replace(System.Globalization.CultureInfo.CurrentCulture.DateTimeFormat.FullDateTimePattern, pattern, ""), System.Globalization.CultureInfo.CurrentUICulture);
}

public static string ToCulturalCurrencyWithoutSign(this decimal amount)  
{
    System.Globalization.NumberFormatInfo ni = (System.Globalization.NumberFormatInfo)System.Globalization.CultureInfo.CurrentCulture.NumberFormat.Clone();
    ni.CurrencySymbol = "";
    return amount.ToString("C", ni).Trim();
}
</code></pre>

<h4 id="regionaldialects">Regional Dialects</h4>

<p>With the culture, we always have to specify a region. For example, our site’s culture cannot be just English (en); it has to be USA English (en-us), Canadian English (en-ca), UK English (en-gb), etc. Because all of these have slightly different settings, there is no generic “English” culture.</p>

<p>This is not true with language. We can create generalized resource files as well as regional or dialect-specific ones, and ASP.Net will automatically select the best resource file to use. For example, we can create a resource file for general English and override it with a region-specific one. All we have to do is name the resource files properly:</p>

<pre><code>App_LocalResources  
- Example.aspx.en.resx
- Example.aspx.en-CA.resx
- Example.aspx.es.resx
- Example.aspx.es-DO.resx
- Example.aspx.es-MX.resx
</code></pre>

<p>Above we have 5 different resource files:</p>

<ul>
<li>en : General English.</li>
<li>en-CA : Canadian English (we have more u’s!)</li>
<li>es : Spanish</li>
<li>es-DO : Dominican Republic Spanish</li>
<li>es-MX : Mexican Spanish</li>
</ul>

<p>ASP.Net will select the most specific file based on your UI Culture.</p>

<h4 id="localizationfilesandvariables">Localization Files and Variables</h4>

<p>Another important feature of localization is the use of variables in our localization strings. Languages can be very complex and it is important to remember you cannot just break up a string around a variable.</p>

<p>For a very simple example, “You just added <em>War and Peace</em> to your Shopping Cart.” is a common type of string you’ll need. The variable <em>War and Peace</em> can be replaced with any item you are adding to your shopping cart. You may be tempted to localize the string as:</p>

<ol>
<li>You just added  </li>
<li>to your Shopping Cart.</li>
</ol>

<p>And output it around a variable on your page. Since the structure of a sentence can vary greatly between languages, it is highly recommended to use String.Format and numbered variables. This gives the localized versions of this sentence all the flexibility they need. You would want to localize the string like: <code>You just added {0} to your Shopping Cart.</code></p>

<p>And display it with:</p>

<pre><code class="language-cs">String.Format(GetLocalResourceObject("example").ToString(), item.Name)  
</code></pre>

<p>The same idea goes for logical paragraphs or basic formatting.  Instead of breaking a paragraph up into multiple separate sentences, always try to keep logically grouped text together as much as possible, and I personally have no problems allowing basic HTML formatting in localized strings.  This allows for smooth translations, as the more the text is broken up, the choppier the translations will be.</p>

<h4 id="plurals">Plurals</h4>

<p>The use of plurals is an important and difficult syntax to master.  In English, and some other languages, we only worry about 2 cases, maybe 3:</p>

<ul>
<li>You have <strong>multiple</strong> items.</li>
<li>You have <strong>a single</strong> item.</li>
<li>You have <strong>zero</strong> items.  (usually the same as multiple, but in some cases it can be awkward)</li>
</ul>

<p>This can be managed by storing the 2-3 different strings in our resource files and extending GetLocalResourceObject to include an identifier for which string to use based on the count.</p>

<p>In many languages, it can be more complex then that.  In Polish for example, the grammar for 1, 2-4, and 0 or 4+ is different, and changes again after 20.  GNU’s gettext has a <a href="http://www.gnu.org/s/hello/manual/gettext/Plural-forms.html">pretty good solution</a> to this dynamic type of pluralization, but ASP.Net does not, you will have to come up with your own solution if that need arises in your project.</p>]]></content:encoded></item><item><title><![CDATA[Remote Incremental Backups via Rsync and SSH to a Drobo]]></title><description><![CDATA[<p>A client had a catastrophic fileserver failure resulting in the loss of a significant amount of important data.  As a result,  I was called in to setup an automated offsite backup.  Due to many factors, I decided to implement a DroboFS instead of a hosted cloud or regular *nix fileserver.</p>]]></description><link>http://bryanrite.com/drobo-incremental-rsync-backups/</link><guid isPermaLink="false">5dcab9e9-3e95-45f6-9588-4fb44d29997f</guid><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Wed, 30 Mar 2011 18:25:00 GMT</pubDate><content:encoded><![CDATA[<p>A client had a catastrophic fileserver failure resulting in the loss of a significant amount of important data.  As a result,  I was called in to setup an automated offsite backup.  Due to many factors, I decided to implement a DroboFS instead of a hosted cloud or regular *nix fileserver.</p>

<p>Using the Drobo, there were a couple of gotchas: it doesn't have sshd enabled, there's no bash shell, no rsync (or rsnapshot, which would make this easier), and more.</p>

<p>I will describe how I set up an automated, password-less, incremental differential, hard-linked backup using rsync over SSH.</p>

<h2 id="whatsanincrementaldifferentialhardlinkedbackup">What's an incremental differential hard-linked backup?</h2>

<p>It is essentially the same as Apple's Time Machine system.  We create a backup using only the files and directories that have changed, but still store it as a full backup on the backup media.  In other words, every time we backup, we will only transfer what has changed, but if you look at that backup on the remote computer, all the files will be there.</p>

<p>How do we do this?  Utilizing hard-linking in the filesystem.  Essentially we are creating two "pointers" that reference the same file (inode).  That way each backup folder has its own copy of the file, but we only store the file once.  If you run the command <code>stat</code> on a file or directory, you can see in the output how many links to that inode there are, or run <code>ls -i</code> to see the inode you are pointing to.  New versions of that file won't overwrite the old ones; the new backups will reference a different inode, keeping your historical data.</p>

<p>This helps for several reasons.  When we need to restore the entire filesystem, we can just move all the files from one directory, rather than merging the incrementals into a delta and applying it to the full backup to get our latest system.  And since we only transfer what has changed, each backup dramatically reduces bandwidth, time, and space.</p>
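<p>You can watch the hard-link mechanism at work with a couple of commands; the file names below are made up, and this is exactly what rsync's <code>--link-dest</code> flag, used later in the backup script, does for unchanged files:</p>

```shell
# Create a file, then give the same inode a second name with ln.
echo "important data" > file.txt
ln file.txt snapshot.txt       # second directory entry, same inode

stat -c '%h' file.txt          # GNU stat: prints the link count, now 2
ls -i file.txt snapshot.txt    # both names list the same inode number

rm file.txt                    # deleting one name (an old backup)...
cat snapshot.txt               # ...leaves the data reachable via the other
```

<p>This is why rotating out the oldest backup folder is safe: removing one name only frees the data once no snapshot references that inode anymore.</p>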

<h2 id="letsgetstarted">Let's get started...</h2>

<p>First things first, set up your Drobo as the instructions specify and be sure to enable <a href="http://support.datarobotics.com/ci/fattach/get/25295/1286306491/redirect/1/session/L2F2LzEvc2lkL0lReUZRZXFr">DroboApps</a>.  Create a share on the Drobo for your backups.  I'll name mine <code>backup</code>; if you use a different name, edit the scripts below as necessary.</p>

<p>You will then need to install the <a href="http://www.drobo.com/droboapps/apps-for-drobofs.php#52">Dropbear SSH</a> and <a href="http://www.drobo.com/droboapps/apps-for-drobofs.php#58">Rsync</a> apps for the following to work.  [<em>Note</em>: You do not need the Apache or Admin apps, but you might find them useful.]</p>

<p>At this point you should be able to log in via SSH to your Drobo using the default SSH credentials; at the time of writing, for the DroboFS that's root and root.  We'll update the root SSH password using the following command:</p>

<pre><code class="language-bash">/mnt/DroboFS/Shares/DroboApps/dropbear/root_passwd
</code></pre>

<p>This will reset and persist our root password between reboots.</p>

<p>[<em>Note</em>: I know that allowing root SSH access is not ideal but I have so far been unable to stop the drobo from allowing it if SSH is enabled... perhaps a commenter will have a solution?]</p>

<p>Now we need to edit the rsync config file:  </p>

<pre><code class="language-bash">vi /mnt/DroboFS/Shares/DroboApps/rsync/rsyncd.conf  
</code></pre>

<p>You should see one share named <code>[drobofs]</code>.  We want to replace it so that the file looks like:  </p>

<pre><code>uid = root  
gid = root  
pid file = /mnt/DroboFS/Shares/DroboApps/rsync/rsyncd.pid

[drobofs]
        path = /mnt/DroboFS/Shares/backup
        comment = Backup Share
        read only = false
</code></pre>

<p>[<em>Note:</em> if you named your backup share different, you'll need to specify it here.]</p>

<p>Alright, now to provide password-less login.  From the machine that has the data you want backed up, generate your SSH keys.  You will likely not want to use a passphrase and I'll leave you to secure SSH as you see fit (perhaps read up about <em>forced-commands-only</em>).</p>

<pre><code>user@server:~&gt; ssh-keygen -t rsa  
Generating public/private rsa key pair.  
Enter file in which to save the key (/home/user/.ssh/id_rsa):  
Enter passphrase (empty for no passphrase):  
Enter same passphrase again:  
Your identification has been saved in /home/user/.ssh/id_rsa.  
Your public key has been saved in /home/user/.ssh/id_rsa.pub.  
The key fingerprint is:  
f2:13:b7:23:75:da:4e:35:a8:32:61:af:43:e1:a0:53 user@server  
</code></pre>

<p>Copy what is contained in <code>id_rsa.pub</code>.  Now, on the Drobo, we have to create the authorized_keys file to enable us to log in with the key we just created.  The required directory and files do not exist by default so, from the Drobo:</p>

<pre><code class="language-bash">mkdir ~/.ssh  
vi ~/.ssh/authorized_keys  
</code></pre>

<p>Now copy the contents of <code>id_rsa.pub</code> into the authorized_keys file.  You should now be able to SSH from the server to the Drobo without needing to enter a password.</p>

<p>Now that our apps are setup on the Drobo and communications will be seamless, we're going to create two scripts, one to rotate and manage the backups on the Drobo and one to do the actual rsync-ing.</p>

<p>The first script will reside on the Drobo in the root of the backup folder.  In our example:</p>

<pre><code class="language-bash">/mnt/DroboFS/Shares/backup
</code></pre>

<p>I'll call it <code>rotate_backups.sh</code> and it'll look like the following.  Edit as necessary for your own purposes, and please note, the Drobo doesn't have the bash shell, so we're using good old <code>#!/bin/sh</code>  </p>

<pre><code class="language-bash">#!/bin/sh

# How many backups would you like to keep, each time you run
# the backup script, a new one will be created, so if you want:
# Daily for a week, script goes cron daily and enter 7.
# Hourly for 3 days, script goes cron hourly and enter 72 (24 hours x 3 days)
NUMOFBACKUPS=7

# Where are we backing up to?
BACKUPLOC=/mnt/DroboFS/Shares/backup

# Delete the oldest backup
NUMOFBACKUPS=`expr $NUMOFBACKUPS - 1`  
if [ -d $BACKUPLOC/backup.$NUMOFBACKUPS ]; then  
        echo "delete backup.$NUMOFBACKUPS"
        rm -Rf $BACKUPLOC/backup.$NUMOFBACKUPS
fi

# Move each snapshot
while [ $NUMOFBACKUPS -gt 0 ]  
do  
        NUMOFBACKUPS=`expr $NUMOFBACKUPS - 1`
        if [ -d $BACKUPLOC/backup.$NUMOFBACKUPS ] ; then
                NEW=`expr $NUMOFBACKUPS + 1`
                mv $BACKUPLOC/backup.$NUMOFBACKUPS $BACKUPLOC/backup.$NEW
                echo "Move backup.$NUMOFBACKUPS to backup.$NEW"
        fi
done  
</code></pre>

<p>This script will delete your oldest backup and move the others down the line... so your latest backup will always be backup.0, 1 run old is backup.1, 2 runs old is backup.2, etc.</p>

<p>Now that our backups are managed properly we will create the actual rsync script.  This will be run on the computer you want to back up from, and I suggest you add it to the crontab so it runs as often as you want it to.  Edit BDIR, BSERVER, REMOTEDIR, and EXCLUDES as necessary.</p>

<pre><code class="language-bash">#!/bin/sh

# Directory to backup.
BDIR=/data_to_backup

# Remote server (should match the password-less SSH credentials
# we setup earlier).
BSERVER=root@drobo

# Full path of the directory on the Drobo to backup to.
REMOTEDIR=/mnt/DroboFS/Shares/backup

# A list of files or directories to exclude.
EXCLUDES=/home/user/backup/excludes.txt

# Activate our rotate_backups script on the drobo.
ssh $BSERVER $REMOTEDIR/rotate_backups.sh

OPTS="--force --ignore-errors --delete-excluded --exclude-from=$EXCLUDES --delete -av --rsync-path=/mnt/DroboFS/Shares/DroboApps/rsync/rsync"

# Do the rsync.
rsync $OPTS -e 'ssh -p 22' --link-dest=../backup.1 $BDIR $BSERVER:$REMOTEDIR/backup.0/  
</code></pre>

<p>Add that script to your hourly/daily/whatever you want crontab and you should be good.  The first time you run it you'll be transferring the entire backup directory so it might take a long time and you'll have an error message about <code>--link-dest backup.1</code> not existing, but you can ignore that.  Subsequent backups should run without a hitch.</p>]]></content:encoded></item><item><title><![CDATA[Repairing a Faulty Disk in a Software RAID Array]]></title><description><![CDATA[<p>I was doing some system maintenance today and came across the following horrific screen:</p>

<pre><code>/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Nov 16 14:13:20 2007
     Raid Level : raid5
     Array Size : 732587712 (698.65 GiB 750.17 GB)
  Used Dev Size : 244195904 (232.88 GiB 250.06</code></pre>]]></description><link>http://bryanrite.com/repairing-a-faulty-disk-in-a-software-raid-array/</link><guid isPermaLink="false">f687690f-9ea6-433c-8083-1fa97d4dabbe</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[RAID]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Wed, 31 Dec 2008 08:55:00 GMT</pubDate><content:encoded><![CDATA[<p>I was doing some system maintenance today and came across the following horrific screen:</p>

<pre><code>/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Nov 16 14:13:20 2007
     Raid Level : raid5
     Array Size : 732587712 (698.65 GiB 750.17 GB)
  Used Dev Size : 244195904 (232.88 GiB 250.06 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0  
    Persistence : Superblock is persistent

    Update Time : Wed Dec 31 10:41:15 2008
          State : clean, degraded
 Active Devices : 3
Working Devices : 3  
 Failed Devices : 1
  Spare Devices : 0

...

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8        1        -      faulty spare
</code></pre>

<p>One of the drives in my fileserver had died!  Time to back up and get that sucker running again.</p>

<p><em>Please note:</em> The following is only a guide to help you replace a failed disc.  I cannot guarantee this will work for you, but it is what I do and has worked every time without any data loss.</p>

<p>As you can see, it is a 4 disc software RAID 5 array with no hot-swap spares.  The following should work for most single disc failure situations in RAID 1, 5, or 6.</p>

<p>It appears that <code>sda</code> has bailed on me.  First things first, <strong>back up the machine</strong>.  If anything goes wrong, you can rebuild from scratch.</p>

<p>You can see the faulty disc has already been removed from the array, but if yours hasn't been removed yet, the commands:  </p>

<pre><code class="language-bash">mdadm --manage /dev/md0 -f /dev/sda1  
mdadm --manage /dev/md0 -r /dev/sda1  
</code></pre>

<p>will mark it as failed (so it can be removed) and remove the <code>sda1</code> partition.</p>

<p>Shut down the machine and swap out the hard drives.  Make sure you replace only the faulty drive, and don't mix up the order of the other drives, because it'll be a pain to get the array back together if you do.</p>

<p>Boot up the machine.  Your RAID array will be in the same degraded state.  We need to partition the new drive exactly the same way we partitioned the drives in the existing array.  Luckily this is a one-liner with <code>sfdisk</code>:</p>

<pre><code class="language-bash">sfdisk -d /dev/sdb | sfdisk /dev/sda  
</code></pre>

<p>The above command dumps the partition table of <code>sdb</code> (any of the functioning drives will do) and pipes it to <code>sfdisk</code>, which partitions <code>sda</code> the same way.  It should only take a second.</p>
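
<p>If you want to double-check the clone before re-adding the drive, you can compare the two dumps with the device names masked out.  This is only a rough sanity check (the sample lines below are stand-ins for real <code>sfdisk -d</code> output, whose exact format varies by version):</p>

<pre><code class="language-bash"># Mask the device names out of two sfdisk -d dump lines and compare them;
# if the layouts match, the masked strings are identical.
old='/dev/sdb1 : start=63, size=488392002, Id=fd'   # sample healthy-drive dump
new='/dev/sda1 : start=63, size=488392002, Id=fd'   # sample new-drive dump
mask() { sed 's|/dev/sd[a-z]|DISK|g'; }
if [ "$(echo "$old" | mask)" = "$(echo "$new" | mask)" ]; then
  echo "Partition layouts match"
fi
</code></pre>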

<p>Then we can simply add the new drive to the array:  </p>

<pre><code class="language-bash">mdadm --manage /dev/md0 -a /dev/sda1  
</code></pre>

<p>If you take a look at <code>cat /proc/mdstat</code> or <code>mdadm --detail /dev/md0</code> you should see that the array is recovering (with a percentage done).</p>
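
<p>If you'd rather script the progress check than keep re-running those commands, the completion percentage can be pulled out of the <code>mdstat</code> text with <code>sed</code>.  A small sketch, shown against a sample line since the exact layout varies a little between kernel versions:</p>

<pre><code class="language-bash"># Extract the rebuild percentage from an mdstat-style status line.
# On a real box, feed it: grep recovery /proc/mdstat
line='[==>..................]  recovery = 12.6% (30866432/244195904) finish=89.3min speed=39812K/sec'
echo "$line" | sed -n 's/.*recovery = \([0-9.]*%\).*/\1/p'   # prints 12.6%
</code></pre>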

<p>After the recovery is done, the array will be back to a clean state, good as new!</p>

<p>Good luck!</p>]]></content:encoded></item><item><title><![CDATA[Update Panel, GridView, and Non-asynchronous Postbacks in ASP.Net 2.0]]></title><description><![CDATA[Working with a GridView in an Update Panel is simple but when you require a full postback instead of an async one you can run into some trouble.]]></description><link>http://bryanrite.com/update-panel-gridview-and-non-asynchronous-postbacks-in-aspnet-20/</link><guid isPermaLink="false">c1efe669-bd46-4a00-84da-51e74b22b31c</guid><category><![CDATA[C#]]></category><category><![CDATA[ASP.Net]]></category><category><![CDATA[AJAX]]></category><category><![CDATA[User Interface]]></category><dc:creator><![CDATA[Bryan Rite]]></dc:creator><pubDate>Wed, 14 Nov 2007 15:22:00 GMT</pubDate><content:encoded><![CDATA[<p>In my latest job, I've been doing a lot of work with Microsoft and ASP.Net.  I've recently started using the AJAX library for ASP.Net 2.0 and, while it has gone surprisingly well for the most part, there was one problem I couldn't find an answer to online, so I thought I'd post my solution.</p>

<p>Working with a GridView in an Update Panel is pretty quick and simple, but when you require a full postback instead of an asynchronous one you can run into some trouble.  If the control that triggers the full postback has a fixed <code>ID</code>, there's no problem; simply add:</p>

<pre><code class="language-csharp">&lt;asp:UpdatePanel ...&gt;  
   &lt;ContentTemplate&gt;...&lt;/ContentTemplate&gt;
   &lt;Triggers&gt;
      &lt;asp:PostBackTrigger ControlID="CONTROL_ID" /&gt;
   &lt;/Triggers&gt;
&lt;/asp:UpdatePanel&gt;  
</code></pre>

<p>The PostBackTrigger defined there will issue a full-page postback.  The problem arises with controls generated at run time, such as those inside a <code>TemplateField</code> or <code>ButtonField</code> within the GridView.  There is no (easy) way to assign them as a <code>PostBackTrigger</code>: either you don't know the ID (as with a <code>ButtonField</code>), or the ID changes depending on the surrounding MasterPages or panels.  The solution is simple though.</p>

<p>Basically, all we have to do is register each control with the ScriptManager in the code-behind during the item's <code>DataBinding</code> event.  This works with any type of control you want to put in your GridView.  In this example, I'll use a <code>LinkButton</code> in a <code>TemplateField</code>.</p>

<pre><code class="language-csharp">... UpdatePanel definition etc. ...
&lt;asp:GridView ...&gt;  
   &lt;Columns&gt;
      &lt;asp:TemplateField&gt;
         &lt;ItemTemplate&gt;
            &lt;asp:LinkButton CommandName="Select" ID="PostBackButton" runat="Server" Text="Do PostBack" OnDataBinding="PostBackBind_DataBinding"&gt;
            &lt;/asp:LinkButton&gt;
         &lt;/ItemTemplate&gt;
     &lt;/asp:TemplateField&gt;
     ... Any other Columns ...
   &lt;/Columns&gt;
&lt;/asp:GridView&gt;  
</code></pre>

<p>Then in the codebehind:</p>

<pre><code class="language-csharp">protected void PostBackBind_DataBinding(object sender, EventArgs e)  
{
   LinkButton lb = (LinkButton) sender;
   ScriptManager sm = (ScriptManager)Page.Master.FindControl("SM_ID");
   sm.RegisterPostBackControl(lb);
}
</code></pre>

<p>The ScriptManager has a handy <code>RegisterPostBackControl</code> method, and the link buttons are dynamically registered to issue full postbacks.</p>

<p><em>Note:</em> If you aren't using a Master Page, you can just get the scriptmanager via its local reference: <code>this.SM_ID.RegisterPostBackControl(lb);</code></p>

<p>Hope this helps someone out!</p>]]></content:encoded></item></channel></rss>