Installing Laravel Spark Manually with Composer – 2021

For whatever reason, you may need to install the new Laravel Spark into your project without following the official installation instructions. Sometimes you don’t fully control the server environment or the devops pipeline, so it’s easier to include the package files directly in your project. Here’s how to tell Composer where the files are and how to load Spark.

First – Update composer.json

Just like the official install docs describe, you need to add a repositories snippet to your composer.json file.

"repositories": [
   {
     "type": "path",
     "url": "./spark-stripe"
   }
],

In my case, I have a folder in the root of my project called spark-stripe in which I placed all of the package files. Does it matter what the name is? I don’t know, but since that is the package name, it made the most sense to me.

Finally – Install the Package

Lastly, you’ll need to install the package like you normally would. For what it’s worth, I’m using Composer v2.

composer require laravel/spark-stripe 

If you don’t see a new list of dependency packages being installed, you likely did something wrong. Go back and check your composer.json file for spelling errors or other mistakes.

Hope this helps you out!

How to add a Macro to the Laravel HTTP client facade

While working on Let Them Eat 🍰, I came across some peculiar behavior. Some of the Slack web APIs require requests to be sent as a URL-encoded form object. Since all of their API endpoints support this method of access, I created a little helper on my user object to get an instance of the Laravel HTTP client with the asForm() method already applied.
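The helper itself was nothing fancy. Something along these lines (a reconstruction for illustration; the method name and token column are my assumptions, not the actual code):

use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;

// On the User model: every Slack call starts from a form-encoded client.
public function slackClient(): PendingRequest
{
    return Http::asForm()
        ->withToken($this->slack_access_token) // hypothetical token column
        ->baseUrl('https://slack.com/api');
}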

This was working great until today, when I wanted to add support for blocks in one of my bot responses. While I could have sent that field as a JSON string, it felt better to change the request to be sent fully as JSON instead. I thought that would be as simple as calling ->withHeaders() again, but unfortunately, the deep recursive merge used by the HTTP facade doesn’t clear out any existing values.

Http::asForm()->asJson()

"headers" => [
  "Content-Type" => [
    0 => "application/x-www-form-urlencoded",
    1 => "application/json"
  ]
]

Obviously, the best option would be to not call the facade with multiple methods that change the same object, but in this case I really wanted to just overwrite it for this one instance.

Enter, Macros!

There are lots of places online to read about Laravel macros, so I won’t go into it too deeply here, but the gist is that you can add custom methods to core objects without extending them into new classes. This is super helpful when you just want a little helper method but don’t want to go through the process of extending the class the old-fashioned way, and it’s especially useful for accessing protected properties or private methods.
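As a quick, generic illustration (this is essentially the example from the Laravel docs, not code from my app), here’s what a macro on Collection looks like:

use Illuminate\Support\Collection;
use Illuminate\Support\Str;

// Register the macro once, e.g. in a service provider's boot() method.
Collection::macro('toUpper', function () {
  return $this->map(function ($value) {
    return Str::upper($value);
  });
});

collect(['hello', 'world'])->toUpper(); // ['HELLO', 'WORLD']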

I knew most of Laravel’s facades and core classes are macroable, so I jumped into my AppServiceProvider boot method and added a lil somethin’ somethin’.

// within AppServiceProvider::boot()
// Requires: use Illuminate\Http\Client\PendingRequest;
PendingRequest::macro('clearContentType', function () {
  $this->options['headers']['Content-Type'] = [];
  return $this;
});

Originally, I tried to add the macro directly to the Http facade, but that only worked if I called the new method statically. To have it work the way I expected, I had to add the macro to Illuminate\Http\Client\PendingRequest, which is the class the Http facade hands back under the hood once you start chaining methods.

Using my new macro, I can easily clear out any content type headers before making a request, no matter how many times I call methods that set the content type header.

Http::asForm()->asJson()->asForm()->asJson()->asForm()->clearContentType()->asJson()

"headers" => [
  "Content-Type" => [
    0 => "application/json"
  ]
]

Now, I suspect this is a bug, but I’m not sure. I’ll be opening an issue on GitHub and we shall see. In the meantime, the macro will have to do! 🙂

Migrating Data and Merging Models in Laravel

On one of my side projects, Let Them Eat 🍰, I recently needed to write some migrations to combine two models into one. I had originally optimized a bit too much, and later realized things would be a lot simpler if I only had one model.

There wasn’t a ton of info about this online, so I’m going to do my best to try and explain what I did. For this migration, I was using Laravel 8.

So, it turns out merging models is kind of a pain! In my case, I was merging a SlackUser::class model with the default User::class model that ships with Laravel. From a data perspective, it wasn’t too bad; I needed to add a few columns to the users table that were previously on the slack_users table. The issues arose when I realized there were a lot of places in my code that relied on accessing each model through a relationship on the other: lots of calls to $user->slackUser and $slackUser->user intermingled across the app, depending on what I was doing at the time.
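For context, the two models pointed at each other roughly like this (reconstructed for illustration; namespaces, imports, and the other methods are omitted):

class SlackUser extends Model
{
	public function user()
	{
		return $this->belongsTo(User::class);
	}
}

class User extends Authenticatable
{
	public function slackUser()
	{
		return $this->hasOne(SlackUser::class);
	}
}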

Since my ultimate goal was to completely delete any reference to a SlackUser, I had to take a careful approach when modifying the database.

First, I added the extra columns I needed to the users table.

Schema::table('users', function (Blueprint $table) {
  $table->string('slack_user_id')->after('id')->nullable();
  $table->foreignId('team_id')->after('slack_user_id')->nullable()->constrained()->onDelete('cascade');
  $table->boolean('is_owner')->after('team_id')->default(false);
  $table->string('avatar_url')->after('email')->nullable();
  $table->string('timezone')->after('avatar_url')->nullable();
  $table->boolean('is_onboarded')->after('timezone')->default(false);
  $table->softDeletes();
});

I continued with much the same process for the other tables that needed to be modified.

After the tables were modified, I queried all of the SlackUsers and looped over them to create new User models if they didn’t have one already. In my app, a User model was only created if the person logged into the webapp, otherwise they would happily live on as only a SlackUser. Now, everyone gets a user model, and I don’t really care if they log in or not!

Some advice on the Shifty Coders Slack recommended not to rely on Eloquent here. This makes sense as in a future release, I’ll be completely removing the SlackUser model, so if I relied on Eloquent for the migration, it would throw an error if I ever deleted that class. Here’s what the migration looked like:

DB::table('slack_users')->get()->map(
	function ($slackUser) {
		$userId = $slackUser->user_id;
		$slackUser = Arr::except((array) $slackUser, ['id', 'created_at', 'updated_at', 'user_id']);
		$user = User::findOrNew($userId);
		if (!$user->exists) {
			$user->name = $slackUser['slack_user_id'];
			$user->password = Hash::make(random_bytes(20));
		}
		$user->fill($slackUser);
		$user->save();
	}
);

This is all pretty straightforward. For each SlackUser, check whether they already have a User model, and if not, create a new User with a random password. My app doesn’t actually use passwords for authentication, relying solely on Slack OAuth, so the password field is irrelevant. In the future I may want to allow other auth methods, so I left it in for the sake of simplicity.

After all of the users were migrated, I could then go about the business of updating other models that used the SlackUser as a foreign key. In my case, I had messed up in the original migrations and not enforced those foreign keys, but if they are enforced in your app, you’ll need to drop the foreign key before you go about migrating all of this data around.

$table->dropForeign(['slack_user_id']);
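For context, that call sits inside a normal Schema::table() closure in the migration’s up() method. A minimal sketch (the table and column names here are placeholders for whatever your schema actually uses):

Schema::table('cakes', function (Blueprint $table) {
	// Drop the constraint before shuffling IDs around below.
	$table->dropForeign(['slack_user_id']);
});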

Here is what the migration to change the foreign keys on my Cake model looked like.

DB::table('cakes')->get()->map(
	function ($cake) {
		$giver = DB::table('slack_users')->select('slack_user_id')->where('id', '=', $cake->giver_id)->get()->first();
		$giver = User::where('slack_user_id', '=', $giver->slack_user_id)->withTrashed()->first();
		$target = DB::table('slack_users')->select('slack_user_id')->where('id', '=', $cake->target_id)->get()->first();
		$target = User::where('slack_user_id', '=', $target->slack_user_id)->withTrashed()->first();
		DB::table('cakes')->where('id', $cake->id)->update(['giver_id' => $giver->id, 'target_id' => $target->id]);
	}
);

Again, notice that I’m not using Eloquent to access the SlackUser model, instead relying on the DB facade. I’m free to delete SlackUser::class at any time now!

All of this code was added to the up() method of my migration, and I carefully reversed all of the column changes for the down() method. One thing I did not do in the down() method was re-migrate any data. I figured if the deploy went so badly that I needed to do that, then I would be better off restoring the database entirely from a backup instead. The down() changes I made were purely so I could migrate up/down for tests, which weren’t reliant on any database values anyway.
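For reference, a down() that reverses just the users table changes from earlier might look something like this (a sketch, not the exact code from the project):

Schema::table('users', function (Blueprint $table) {
	// Drop the foreign key before dropping its column.
	$table->dropForeign(['team_id']);

	$table->dropColumn([
		'slack_user_id',
		'team_id',
		'is_owner',
		'avatar_url',
		'timezone',
		'is_onboarded',
	]);

	$table->dropSoftDeletes();
});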

That brings me to another pain point: tests! 90% of my tests had to be updated since most of them relied on the SlackUser. I started with one test file at a time, running the entire file first, then each failing test. I generally changed any instance of SlackUser to User first and then saw what broke. The first few were painful, as there were references to relationships that needed to be updated. Oftentimes while doing this, I would catch something that needed to be updated in my migration as well.

Eventually, most of the methods from SlackUser were migrated to User and all of the relationships were updated. Views were the last thing to be checked and I even managed to add a few missing tests based on failures I found while manually browsing the site locally.

// One of my newly added tests!
/** @test */
public function a_user_can_view_the_perk_redemption_page() //phpcs:ignore
{
	$this->withoutExceptionHandling();
	$user = factory(User::class)->create([
	  'is_owner' => true
	]);

	// Make fake users for the manager selection.
	factory(User::class, 5)->create([
	  'team_id' => $user->team->id
	]);

	$this->actingAs($user);

	$user->team->perks()->create([
	  'title' => 'an image perk',
	  'cost' => 100,
	  'image_url' => 'https://perk-image.com/perk.jpg'
	]);

	$this->assertCount(1, Perk::all());

	$res = $this->get(route('redeem-perk-create', Perk::first()));
	$res->assertOk();

	// Check for all of the names on the page. (manager selection for perk redemption)
	$res->assertSee([...User::all()->map->name]);
}

In the end, the GitHub PR had 57 changed files! A huge undertaking by any standard. I would venture a guess that those 57 files represent 80-90% of all of the code I had written for the app.

Overall, I’m happy I did this, as the logic surrounding users is much simpler to understand. I also got the opportunity to try out some new things and learn a bit more about the built-in DB facade. I’m still kinda intimidated by SQL in general, but I’m getting more courageous every time I tackle one of these projects. Backups are still really important though! 😉

Conclusion

If you have any questions, feel free to reach out to me on Twitter or leave a comment!

How to use Tailwind CSS v1.0 with Laravel Mix

Now that Tailwind CSS is approaching version 1.0, I wanted to go ahead and start using it on some projects that will be launching in the next few months. The API seems stable, so now is as good a time as any to document how to get Tailwind up and running with a new Laravel project.

Install Dependencies

All you need to do is run a few simple commands.

Install Tailwind 1.0 Beta

yarn add -D tailwindcss@next

Install Laravel Mix Tailwind

yarn add -D laravel-mix-tailwind

Generate Tailwind config file

yarn tailwind init

This is where all of your modifications to Tailwind will live. Check the official documentation for specifics on which keys to add/override depending on your needs.

Replace webpack.mix.js

Replace the contents of your webpack.mix.js file with this snippet to mimic the default behavior of a new Laravel project.

const mix = require('laravel-mix');
require('laravel-mix-tailwind');

/*
|--------------------------------------------------------------------------
| Mix Asset Management
|--------------------------------------------------------------------------
|
| Mix provides a clean, fluent API for defining some Webpack build steps
| for your Laravel application. By default, we are compiling the Sass
| file for the application as well as bundling up all the JS files.
|
*/

mix.js('resources/js/app.js', 'public/js')
    .sass('resources/sass/app.scss', 'public/css')
    .options({
        postCss: [require('tailwindcss')]
    });

The above snippet will compile Tailwind using the standard app.scss as the base. Be sure to add the Tailwind directives (@tailwind base, @tailwind components, and @tailwind utilities) to the top of that file so the utility classes are injected. You can also still use Sass if that’s something you want to take advantage of inside your custom CSS components.

Running Mix is the same as always.

yarn run watch
// OR
yarn run dev

Updates

If something changes when Tailwind 1.0 is officially released, I’ll try to update this article accordingly. Leave a comment or reach out on Twitter if something needs to be updated or modified.

Converting a WordPress Plugin Store to HTTPS

A few weeks ago, I was experimenting with https:// on the site for my plugin, ACF Widgets. I added the SSL cert for my domain using Let’s Encrypt and the requests were being handled fine, but I was only enforcing SSL on the checkout page. If you hit the ACF Widgets site with https:// initially, everything worked fine, but my login rules were causing some issues with people logging in to the support forums.

After 4 or 5 people contacted me through other channels telling me they couldn’t log in to get support, I decided to look into it a little more closely. As it turns out, the login pages were trying to work over SSL while people were trying to log in with insecure http:// POST requests. Whoops!

To fix this problem, I needed to convert the whole site to use https:// all of the time. This too worked out fine and wasn’t an issue: three lines in .htaccess that you can find on a million different blogs. Super easy stuff. What made things more difficult was updating the URL my plugin uses to check for and receive automatic updates.

The problem with copying .htaccess rules

I think this goes back to a fundamental flaw with humans, in that we want results and we want them immediately. 99% of the tutorials I found advocated for an .htaccess rule that looks something like this:

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://yourdomain.com/$1 [R,L]


Generally, this will work, and it’s fine. Let’s go through it line by line.

  1. Turn the rewrite engine on. This tells Apache that we’re going to do a rewrite so it can load the required modules.
  2. Set a condition. In this case, return true if the server is listening on port 80 (the standard port for HTTP).
  3. Redirect the request to the given URL, passing along the captured request path.

Like I said, for most sites this works fine. However, when I pushed the next update to my plugin, I noticed I wasn’t getting any update notices. Why wasn’t it working?!

Now, in my most recent plugin update, I had updated the store URL to my new https:// domain, so I knew that after everyone updated, whatever weird issue was going on would probably go away. I did some tests and sure enough, requesting the update using the https:// store URL triggered an update. So why wasn’t it working with the http:// URL? What was happening during the redirect that broke things?

Since I have almost 500 customers, sending an email asking everyone to manually re-install this new version of my plugin was unacceptable. I knew I could do better. I decided to dig into the inner workings of Apache and HTTP a little bit to understand what was going on behind the scenes.

A brief history of redirects in HTTP

From my understanding after reading numerous blogs, the original intent of the HTTP/1.1 spec was that if no redirect status code was specified (i.e. 301, 302, etc.), the client was supposed to repeat the redirected request with the same method as the original. So if I send a POST request to /my-api-endpoint/, the client should honor that POST request and its data if I do not specify a status code. In Apache’s mod_rewrite, this looks like:

RewriteRule ^(.*)$ https://example.com/$1 [R]

Using the [R] flag with no options.

Somewhere along the way (I’ve seen both IE and Netscape blamed), browsers and HTTP clients in popular languages began to interpret any missing status code as a 302, and in practice a 302 redirect gets followed with a plain GET request. Herein lies the problem with our updates.

EDD Software Licensing Update Process

To understand why updates were failing, I also needed to examine the source code for the update script I was using. At the time of this writing, ACF Widgets is built upon Easy Digital Downloads and the Software Licensing add-on. It works great! Though in my case, updating the store to https:// was obviously causing issues. Digging into the code, we find this:

<?php
$request = wp_remote_post( $this->api_url, array( 'timeout' => 15, 'sslverify' => false, 'body' => $api_params ) );

And here is our problem. EDD SL uses wp_remote_post() to send API requests to the URL of our plugin store, which is fine; nothing wrong with that. However, when our POST request to the EDD store encounters a redirect without a specified status code, cURL follows it as a GET request for the homepage with no query parameters. Since that’s obviously not what we want, the EDD SL update script fails silently (like it should) so we don’t clutter the user’s dashboard with errors in case our store is down for any reason. In our case though, we want to preserve that POST request and any data we send. So how do we do that?

Enter the Magical Hero Extraordinaire Status Code™, 307

According to the HTTP spec, a 307 status code should behave thusly:

The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

Perfect! That’s exactly what we want! In short, this will preserve our POST request and any data we send. So now we can use it in conjunction with a normal 301 redirect to route all of our traffic through our newly secured domain.

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteCond %{REQUEST_METHOD} GET
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://example.com/$1 [R=307,L]

So line by line, let’s go through this new .htaccess.

  1. Turn on the rewrite engine.
  2. Check for a request on port 80.
  3. Check for the GET request method.
  4. Redirect (via a 301 for SEO purposes) and ignore all other rewrite rules (the [L] flag).

Now, if we don’t have a GET request (maybe we have an API that uses PUT, POST, or DELETE), it will get routed through the second rule.

  1. Check for a request on port 80.
  2. Perform a 307 redirect, thus preserving our request method and data. Also, ignore any more rewrite rules ([L] flag).

Now for those of you worried about SEO, don’t worry. Google only cares about your visible stuff, AKA GET requests. You shouldn’t get a penalty for other request types (because Google won’t even know to access them).

As you can see, 307 redirects can be extremely powerful. With the correct caching headers, you can even instruct clients to cache the results while you update your API or tools to use your new secured endpoints, without sacrificing security.

So what next?

Since we now have our store properly configured to redirect requests from plugins out in the wild, there isn’t anything else to do except wait. Customers will update the plugin to the new version, which uses the new https:// store endpoint. If you have detailed stats about version usage for your plugin, you could eventually remove the redirect, though I don’t see a reason to for the vast majority of plugin stores out there. If anything, it’s a good catch-all for anything you may add in the future and forget about.

Questions? Comments? Leave ’em down below. I would love to hear from you!

Modifying Custom Taxonomy Forms in WordPress

Today, I was working on a client project and needed to add a little snippet underneath a hierarchical taxonomy form that was registered on a custom post type. Unfortunately, after spending about ten minutes digging through core and googling, I couldn’t find an action hook to use. Luckily, after tracing the output a little further back, I found what I was looking for.

Continue reading “Modifying Custom Taxonomy Forms in WordPress”

Publishing A WordPress Plugin without SVN, Utilizing Ship

So just this past week, I submitted my first plugin to the WordPress.org plugin repository. Needless to say, it was an exciting (yet dull) experience! Since I’m still fairly green when it comes to professional development as a whole, there were some aspects of the .org publishing process that were confusing, so I’m writing this post in hopes that someone will find it useful. 🙂
Continue reading “Publishing A WordPress Plugin without SVN, Utilizing Ship”

Adding a custom style button in the WordPress post editor

So my title is a little misleading, as what we’re creating largely depends on your definition of a button. However, the point still stands. 🙂

Here is an example of what we’ll be creating today.

[Screenshot: the custom style button in the WordPress post editor]
Continue reading “Adding a custom style button in the WordPress post editor”

Crafting Personalized HTML E-Mails with wpMandrill

Recently, I’ve been working on a new little side project called ToDo, a small SPA that allows you to create and manage a ToDo list. It utilizes the new WordPress JSON REST API and it’s awesome. You should definitely check it out!

Edit: ToDo in its current form has unfortunately been “closed”. While you may still find something at the link, I’m not doing anything with it, and will be replacing it with something new, hopefully, in the very near future.
Continue reading “Crafting Personalized HTML E-Mails with wpMandrill”

Why WordPress 4.2.3 should have been a non-issue.

Let me preface the following post by saying this: I am sorry if your site was affected by the WP 4.2.3 update. I’m sure nobody on the core team wanted things to break, and if they could go back in time and do things differently, then I think they would.

What happened?

So for the uninformed: on July 23, 2015, the WP core team released a security update that was installed on millions of websites via automatic updates. For a lot of people, this was fine and they weren’t affected by the change. However, another group of sites (some would argue a large portion) were broken due to breaking changes in the WP Shortcode API. This isn’t a post about the shortcode API or the vulnerability though. You can read about the update on the Make WP Core blog.

Why things should not have broken.

Take this section with a grain of salt, and recognize it as my opinion. You don’t have to agree, and I don’t expect everyone to.

I have been developing on the WordPress platform for about 3 years now. I currently work for a small digital marketing agency in Northern Kansas as the sole developer creating websites for our clients. Before I started working here, the previous developer had built a lot of sites using premium themes from ThemeForest, which, as we all know, usually include a bunch of shortcodes for inserting content and functionality across the site. After doing support for a few of these clients, I quickly realized that shortcodes were not a solution for displaying our clients’ content. Not only were they messy and difficult for clients to understand (in some instances), but they created a dependency that I was not comfortable with on such a widespread scale. A client accidentally deactivating a plugin could remove core pieces of content, and that was a risk I just wasn’t willing to take.

I started to build out themes for our clients using _S and designs from our in-house designer. I jumped onto the ACF bandwagon just before the release of v5 and have been using the Pro version on all of our clients’ sites with great success (Elliot’s commitment to backwards compatibility is another post for another time). ACF allowed me to create highly customized interfaces for our clients while utilizing an API that would keep working regardless of whether ACF was activated or even installed (check out Bill Erickson’s post about removing dependencies from ACF). Over time, I have migrated some clients over to a more customized solution when they came to us struggling to manipulate shortcodes, and they have loved it.
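To make the “works even if ACF goes away” point concrete: ACF stores its values in regular post meta, so a theme or plugin can read them with core functions. A rough sketch (the field name is a placeholder):

<?php
// Read an ACF field straight from post meta so nothing breaks if ACF is deactivated.
$tagline = get_post_meta( get_the_ID(), 'tagline', true );

if ( ! empty( $tagline ) ) {
	echo esc_html( $tagline );
}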

Now I know what you’re thinking: “But Daron, using ACF ties you to a theme and that’s bad if you’re a blogger who likes flexibility!” And you’re right. Tethering some clients to a specific theme just isn’t an option, but that isn’t an excuse to build solutions that are fundamentally flawed. One of the use cases that really blew my mind was how people were outputting and manipulating URLs via shortcodes. A much better approach would have been to use custom fields and post meta and then call that data from the theme. Another alternative would be to build a custom plugin with a simple admin interface and some custom metaboxes.

In the Make WP post, Andrew mentioned how people were using the shortcode API in ways it was never intended for. You don’t need to look any further than the default [gallery] shortcode to see what the core team was talking about. The gallery shortcode is completely self-contained: it outputs its own HTML and any attributes needed to attach styling to. If you’re building shortcodes to output a link to an image or to a URL, then you’re fundamentally using shortcodes in the wrong way. Using actions and filters, you can display all sorts of things in and around the_content. If you need to link a background image, you should be using the customizer or a custom field/metabox. For one, a custom metabox is a vastly superior UX, as well as giving you the peace of mind that sites won’t break because of a fundamentally flawed implementation. The shortcode Codex entry does not mention using shortcodes as a replacement for HTML attributes, and it probably never will. If your site needs to be that customizable, you should be using custom fields and metaboxes.

If you want to remove the theme dependency from solutions like ACF and CMB2, use a hook and display that data programmatically. This is a totally valid way to show custom content:

<?php
add_filter( 'the_content', function( $content ) {
	// Note the argument order: get_post_meta( $post_id, $key, $single ).
	$extra_content = get_post_meta( get_the_ID(), 'extra_content', true );

	if ( ! empty( $extra_content ) ) {
		return $content . $extra_content;
	}

	return $content;
}, 10, 1 );

Just pop that into a plugin file and you’re off to the races. If you need more programmatic control over where content is displayed, then you should look into developing a custom theme or using a more inclusive shortcode. Note, the $extra_content variable should include all of the HTML necessary to support itself. You can enqueue CSS as needed from the plugin as well.

Here is another example that was specifically mentioned in the Make WP article, along with a workaround that doesn’t require hacking a theme.

<?php
add_filter( 'the_content', function( $content ) {
	// Just pretend you have a custom metabox that saves the attachment ID for you.
	$image_id  = get_post_meta( get_the_ID(), 'custom_image', true );
	$image_src = ''; // instantiate variable

	if ( ! empty( $image_id ) ) {
		// wp_get_attachment_image_src() returns an array; the URL is the first element.
		$image     = wp_get_attachment_image_src( $image_id, 'large' );
		$image_src = $image ? $image[0] : '';
	}

	if ( ! empty( $image_src ) ) {
		return "<div class='has-bg-image' style='background-image: url({$image_src});'>" . $content . '</div>';
	}

	// Fallback for when there is no image.
	return $content;
}, 10, 1 );

Using the above snippet, you can wrap the_content in a div and apply a background image to that div. Using CSS, you can further position the image however you want. You can even include that CSS in the same plugin, thus removing any theme dependency.
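Shipping that CSS from the plugin is just a normal enqueue. A minimal sketch (the handle and file path are placeholders):

<?php
// Load the stylesheet that styles .has-bg-image from the plugin itself.
add_action( 'wp_enqueue_scripts', function () {
	wp_enqueue_style(
		'my-bg-image-styles',
		plugins_url( 'css/bg-image.css', __FILE__ )
	);
} );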

In closing

We as developers are responsible when things break, and it’s in our best interest to use the tools we are given to build applications and solutions that will stand the test of time. Hacking APIs is never a good solution to a problem, and it’s up to us to use some common sense and read the documentation on the intended uses of the software we leverage every day. Sure, people will always find new ways to use software that the developers never intended; however, you do not have the right to be upset when you’ve implemented an unofficial workaround for a problem that is easily solved by utilizing other techniques and APIs. The core team did everything they could to avoid breaking things. The problem is, it’s impossible to foresee every possible unofficial use case for every API in the software. Stop giving them flak for your experimental implementations of the API.

I know this probably seems like a disorganized rant, and in some ways it probably is, but I needed to get it out there and off my mind. Leave a comment if you want to continue the discussion, or you can tweet me @Daronspence.