ImageX Media: Complete Content Marketing with Drupal

Drupal Planet - Fri, 2016-08-19 17:11

At its most basic, content marketing is about maintaining or changing consumer behaviour. Or more elaborately, it’s “a marketing technique of creating and distributing valuable, relevant and consistent content to attract and acquire a clearly defined audience -- with the objective of driving profitable customer action.”

Categories: Drupal

ImageX Media: Want to be a Content Marketing Paladin? Then Automate Your Content Production Workflows with These (Free) Tools

Drupal Planet - Fri, 2016-08-19 17:07

Flat-lining content experiences and withering conversion rates can be the kiss of death for almost any website. When content experiences deteriorate, one issue appears time and time again: the amount of time and resources required to produce and manage content marketing initiatives. Among the many best practices and strategies that accelerate growth is the all-powerful move toward productivity automation.

Categories: Drupal

ImageX Media: Debugging Your Migrations in Drupal 8

Drupal Planet - Fri, 2016-08-19 12:04

One of the most useful features of Drupal 8 is the migration framework in core, and plenty of plugins for different sources are already available in contributed modules.

Any code you write will eventually need to be debugged. Because migrations can only be started with Drush, debugging can be a bit challenging, and it gets even more interesting when you develop your website in a Vagrant box.

In this tutorial, we will go through setting up Xdebug and PhpStorm to debug your migrations.

Categories: Drupal

OSTraining: How to Use the Drupal Group Module

Drupal Planet - Fri, 2016-08-19 06:22

In this tutorial, I'm going to explain how you can use the new Group module to organize your site's users. Group is an extremely powerful Drupal 8 module.

At the basic level, Group allows you to add extra permissions to content. 

At the more advanced level, this module is potentially a Drupal 8 replacement for Organic Groups.

Categories: Drupal

Mediacurrent: Friday 5: 5 Ways to Use Your Browser Developer Tools

Drupal Planet - Fri, 2016-08-19 05:12

TGIF! We hope the work week has treated you well.

Categories: Drupal

Nuvole: Optimal deployment workflow for Composer-based Drupal 8 projects

Drupal Planet - Fri, 2016-08-19 04:20
Considerations following our Drupal Dev Days Milan and Drupalaton presentations, and a preview of our DrupalCon training.

This post is an excerpt from the topics covered by our DrupalCon Dublin training: Drupal 8 Development - Workflows and Tools.

During the recent Nuvole presentations at Drupal Dev Days Milan 2016 and Drupalaton Hungary 2016 we received a number of questions on how to properly set up a Drupal 8 project with Composer. One question in particular revealed that existing practices differ completely from one another: "What is the best way to deploy a Composer-based Drupal 8 project?"

We'll quickly discuss some options and describe what works best for us.

What to commit

You should commit:

  • The composer.json file: this is obvious when using Composer.
  • The composer.lock file: this is important since it allows you to rebuild the entire codebase in exactly the state it was in at a given point in the past.

The fully built site is commonly left out of the repository. But this also means that you need a way to rebuild and deploy the codebase safely.

Don't run Composer on the production server

You would clearly never run composer update on the production server, as you want to be sure you deploy exactly the same code you have been developing on. For a while, we considered it enough to have Composer installed on the server and to run composer install there, relying on the (committed) composer.lock file for predictable results.

Then we discovered that this approach has a few shortcomings:

  • The process is not robust. A transient network error or timeout might result in a failed build, thus introducing uncertainty factors in the deploy scripts. Easy to handle, but still not desirable as part of a delicate step such as deployment.

  • The process will inevitably take a while. If you run composer install directly in the webroot, your codebase will be unstable for a few minutes. This is orders of magnitude longer than a standard update process (i.e., running drush updb and drush cim) and it may affect your site availability. This can be circumvented by building in a separate directory and then symlinking or moving directories.

  • Even composer install can be unpredictable, especially on servers with restrictions or running different versions of Composer or PHP; in rare circumstances a build may succeed but yield a different codebase. This can be mitigated by enforcing (e.g., through Docker or virtualization) a dev/staging environment that matches the production environment, but you still lose control over a relatively lengthy process.

  • You have no way of properly testing the newly built codebase after building it and before making it live.

  • Composer simply does not belong in a production server. It is a tool with a different scope, unrelated to the main tasks of a production server.

Where to build the codebase? CI to the rescue

After ruling out the production server, where should the codebase be built then?

Building it locally (i.e., in a developer's environment) can't work: besides the differences between the development and the production (--no-dev) setup, there is the risk of missing small patches that were applied to the local codebase only. And a totally clean build is always necessary anyway.

We ended up using Continuous Integration for this task. Besides the standard CI job, which runs after every push to the branches under active development, performs a clean installation and runs the automated tests, a second CI job builds the full codebase from the master branch and the composer.lock file. This allows the build to be shared between developers, deployed quickly to production through a tarball or rsync, and actually tested before going live for maximum safety, with a process like: automatically import the production database, run database updates, import the new configuration, and run a subset of automated tests to ensure that basic site functionality has no regressions.

Slides from our recent presentations, mostly focused on Configuration Management but covering part of this discussion too, are below.

Tags: Drupal Planet, Drupal 8, DrupalCon, Training
Attachments: Slides: Configuration Management in Drupal 8
Categories: Drupal

Flickr: Website Development and Designing Company India | Bliss Web solution

Drupal Talk - Fri, 2016-08-19 03:10

Bliss Web Solution posted a photo:

Bliss Web Solution is a website development and designing company in India. We offer professional website development and designing services at a reasonable price. We also provide website services in Ahmedabad and Gujarat.

We also provide eCommerce web development (Magento, BigCommerce), CMS web development (WordPress, Joomla, Drupal), website design and web development services (PHP, CakePHP, CodeIgniter) and digital marketing services (SEO, SMO, SMM, SEM and PPC).

For more visit us: www.blisswebsolution.com/

You can also join us on Facebook, Twitter, Linkedin and Youtube.

Categories: Drupal

Jim Birch: Styling Views Exposed Filters Selects in Drupal 8

Drupal Planet - Fri, 2016-08-19 02:20

Styling the HTML <select> tag to appear similar in all the different browsers is a task unto itself. It seems that on each new site I find myself back visiting this post by Ivor Reić for a CSS-only solution. My task for today is to use this idea to theme an exposed filter on a view.

The first thing we need to do is add a div around the select.  We can do this by editing the select's twig template from Drupal 8 core's stable theme.  Copy the file from

/core/themes/stable/templates/form/select.html.twig to


Then add the extra <div class="select-style"> and closing </div>, like so.

Here is the LESS file that I compile, which includes Ivor's CSS plus some adjustments I added to even out the exposed filter. Each rule is commented, explaining what it does.

I will compile this into my final CSS and we are good to go. The display of the form and the select list should be pretty close to what I want across all modern browsers. Adjust as needed for your styles and design.

Read more

Categories: Drupal

Flickr: Drupal Theme Development Services - OSSMedia Ltd

Drupal Talk - Thu, 2016-08-18 21:35

GulinoBarnes posted a photo:

Looking to hire expert Drupal developers? OSSMedia Ltd is a leading Drupal development company that provides a full range of Drupal theme development services. bit.ly/1T6NefF

Categories: Drupal

Zivtech: Staff Augmentation and Outsourced Training: Do You Need It?

Drupal Planet - Thu, 2016-08-18 12:53
The goal of any company is to reduce costs and increase profit, especially when it comes to online and IT projects. When an IT undertaking is a transitional effort, it makes sense to consider staff augmentation and outsourcing.

Consider the marketing efforts of one worldwide corporation. Until recently, each brand and global region built and hosted its own websites independently, often without a unified coding and branding standard. The result was a disparate collection of high-maintenance, costly brand websites.

A Thousand Sites: One Goal

The organization has created nearly a thousand sites in total, but those sites were not developed at the same time or with the same goals. That's a pain point. To solve this problem, the company decided to standardize all of its websites onto a single reference architecture, built on Drupal.

The objectives of the new proprietary platform include universal standards, a single platform that can accommodate regional feature sets, automated testing, and sufficient features to cover 95% of use cases for the company's websites globally.

While building a custom platform is a great step forward, it must then be implemented, and staff needs to be brought up to speed. To train staff on technical skills and platforms, often the best solution is to outsource the training to experts who step in, take over training and propel the effort forward quickly.

As part of an embedded team, an outsourced trainer is an adjunct team member, attending all of the scrum meetings, with a hand in the future development of the training materials.

Train Diverse Audiences

A company may invest a lot of money into developing custom features, and trainers become a voice for the company, showing people how easy it is to implement, how much it is going to help, and how to achieve complex tasks such as activation processes. The goal is to get people to adopt the features and platform. Classroom-style training allows for exercises on live sites and familiarity with specific features.

The Training Workflow

Trainers work closely with the business or feature owner to build a curriculum. It's important to determine the business needs that inspired the change or addition.

Starting with an initial outline, trainers and owners work together. Following feedback, more information gets added to flesh it out. This first phase can take four to five sessions to get the training exactly right for the business owner. For features that follow, the process becomes streamlined. It's more intuitive because the trainer has gotten through all the steps and heard the pain points, but it's important to always consult the product owner. Once there is a plan, the trainers rehearse the curriculum to see what works, what doesn't work, what's too long, and where they need to cut things.

Training Now & Future

Training sessions may be onsite or remote. It is up to the business to decide if attendance is mandatory. Some staffers may wish to attend just to keep up with where the business is going.

Sessions are usually two hours with a lot of time for Q&A. With trainings that are hands-on, it’s important to factor in time for technical difficulties and different levels of digital competence.

Remote trainings resemble webinars. Trainers also create videos to enable on demand trainings. They may be as simple as screencasts with a voiceover, but others have a little more work involved. Some include animations to demo tasks in a friendlier way before introducing a more static backend form. It is the job of the trainer to tease out what’s relevant to a wide net of audiences.

The training becomes its own product that can live on. The recorded sessions are valuable to onboard and train up future employees. Trainers add more value to existing products and satisfy management goals.
Categories: Drupal

Chromatic: Migrating (away) from the Body Field

Drupal Planet - Thu, 2016-08-18 10:46

As we move towards an ever more structured digital world of APIs, metatags, structured data, etc., and as the need for content to take on many forms across many platforms continues to grow, the humble “body” field is struggling to keep up. No longer can authors simply paste a word processing document into the body field, fix some formatting issues and call content complete. Unfortunately, that was the case for many years and consequently there is a lot of valuable data locked up in body fields all over the web. Finding tools to convert that content into useful structured data without requiring editors to manually rework countless pieces of content is essential if we are to move forward efficiently and accurately.

Here at Chromatic, we recently tackled this very problem. We leveraged the Drupal Migrate module to transform the content from unstructured body fields into re-usable entities. The following is a walkthrough.


On this particular site, thousands of articles from multiple sources were all being migrated into Drupal. Each article had a title and body field, with all of the images in each piece of content embedded in the body as img tags. However, our new data model stored images as separate entities along with their associated metadata. Manually downloading all of the images, creating new image entities, and altering the image tags to point to the new image paths clearly was not a viable or practical option. Additionally, we wanted to convert all of our images to lazy-loaded images, so having programmatic control over the image markup during page rendering was going to be essential. We needed to automate these tasks during our migration.

Our Solution

Since we were already migrating content into Drupal, adapting Migrate to both migrate the content in and fully transform it all in one repeatable step was going to be the best solution. The Migrate module offers many great source classes, but none can use img elements within a string of HTML as a source. We quickly realized we would need to create a custom source class.

A quick overview of the steps we’d be taking:

  1. Building a new source class to find img tags and provide them as a migration source.
  2. Creating a migration to import all of the images found by our new source class.
  3. Constructing a callback for our content migration to translate the img tags into tokens that reference the newly created image entities.
Building the source class

Migrate source classes work by finding all potential source elements and offering them to the migration class in an iterative fashion. So we need to find all of the potential image sources and put them into an array that can be used as the source for a migration. Source classes also need a unique key for each potential source element. During a migration, the getNextRow() method is repeatedly called from the parent MigrateSource class until it returns FALSE. So let's start there and work our way back to the logic that identifies the potential image sources.
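Before diving into the row logic, it helps to see the overall shape of such a class. The skeleton below is a sketch rather than the article's actual code: the property names simply mirror the snippets that follow, and the field list is purely illustrative.

/**
 * Sketch of a source class that offers embedded images as migration rows.
 */
class ExampleMigrateSourceImage extends MigrateSource {

  protected $content = '';
  protected $contentArray = array();
  protected $matches = array();
  protected $matchesCurrent = 0;
  protected $contentImported = FALSE;

  public function __toString() {
    return t('Images embedded in legacy body HTML');
  }

  /**
   * Fields offered to the migration; keys match the matches array below.
   */
  public function fields() {
    return array(
      'id' => 'Normalized image URL (unique source key)',
      'url' => 'Image src attribute',
      'alt' => 'Alt text',
      'title' => 'Title / description',
    );
  }

  /**
   * The count drives progress reporting and the getNextRow() loop below.
   */
  public function computeCount() {
    $this->importContent();
    return count($this->matches);
  }

  /**
   * Restart iteration from the first discovered image.
   */
  public function performRewind() {
    $this->matchesCurrent = 0;
  }
}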

/**
 * Fetch the next row of data, returning it as an object.
 *
 * @return object|bool
 *   An object representing the image or FALSE when there is no more data
 *   available.
 */
public function getNextRow() {
  // Since our data source isn't iterative by nature, we need to trigger our
  // importContent method that builds a source data array and counts the
  // number of source records found during the first call to this method.
  $this->importContent();
  if ($this->matchesCurrent < $this->computeCount()) {
    $row = new stdClass();
    // Add all of the values found in @see findMatches().
    $match = array_shift(array_slice($this->matches, $this->matchesCurrent, 1));
    foreach ($match as $key => $value) {
      $row->{$key} = $value;
    }
    // Increment the current match counter.
    $this->matchesCurrent++;
    return $row;
  }
  else {
    return FALSE;
  }
}

Next let's explore our importContent() method called above. First, it verifies that it hasn't already been executed, and if it has not, it calls an additional method called buildContent().

/**
 * Find and parse the source data if it hasn't already been done.
 */
private function importContent() {
  if (!$this->contentImported) {
    // Build the content string to parse for images.
    $this->buildContent();
    // Find the images in the string and populate the matches array.
    $this->findImages();
    // Note that the import has been completed and does not need to be
    // performed again.
    $this->contentImported = TRUE;
  }
}

The buildContent() method calls our contentQuery() method, which allows us to define a custom database query object that supplies the data to parse. Then, back in buildContent(), we loop through the results and build the content property that will be parsed for image tags.
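Before we get to that loop, here is a rough sketch of what contentQuery() might return; the connection key, table, and column names are invented for illustration and will differ for your source data.

/**
 * Build a query against the legacy tables that hold the raw HTML.
 */
protected function contentQuery() {
  // 'legacy', 'example_posts' and the column list are placeholders.
  return Database::getConnection('default', 'legacy')
    ->select('example_posts', 'p')
    ->fields('p', array('id', 'post_title', 'post_content', 'foo', 'bar'));
}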

/**
 * Get all of the HTML that needs to be filtered for image tags and tokens.
 */
private function buildContent() {
  $query = $this->contentQuery();
  $content = $query->execute()->fetchAll();
  if (!empty($content)) {
    // This builds one long string for parsing operations that can be done
    // on long strings without using too much memory. Here, we add fields
    // 'foo' and 'bar' from the query.
    foreach ($content as $item) {
      $this->content .= $item->foo;
      $this->content .= $item->bar;

      // This builds an array of content for parsing operations that need to
      // be performed on smaller chunks of the source data to avoid memory
      // issues. It is only required if you run into parsing issues;
      // otherwise it can be removed.
      $this->contentArray[] = array(
        'title' => $item->post_title,
        'content' => $item->post_content,
        'id' => $item->id,
      );
    }
  }
}

Now we have the logic set up to iteratively return row data from our source. Great, but we still need to build an array of source data from a string of markup. To do that, we call our custom findImages() method from importContent(), which does some basic checks and then calls all of the image-locating methods.

We found it best to create a separate method for each potential source variation, as image tags often store data in multiple formats: pre-existing tokens, full paths to CDN assets, relative paths to images, etc. Each often requires unique logic to parse properly, so separate methods make the most sense.

/**
 * Finds the desired elements in the markup.
 */
private function findImages() {
  // Verify that content was found.
  if (empty($this->content)) {
    $message = 'No html content with image tags to download could be found.';
    watchdog('example_migrate', $message, array(), WATCHDOG_NOTICE, 'link');
    return FALSE;
  }

  // Find images where the entire source content string can be parsed at once.
  $this->findImageMethodOne();

  // Find images where the source content must be parsed in chunks.
  foreach ($this->contentArray as $id => $post) {
    $this->findImageMethodTwo($post);
  }
}

This example uses a regular expression to find the desired data, but you could also use PHP Simple HTML DOM Parser or the library of your choice. I opted for a regex example here to keep library-specific code out of the code sample; in practice, however, we would highly recommend using a DOM parsing library instead.

/**
 * This is an example of an image-finding method.
 */
private function findImageMethodOne() {
  // Create a regex to look through the content.
  $matches = array();
  $regex = '/regex/to/find/images/';
  preg_match_all($regex, $this->content, $matches, PREG_SET_ORDER);

  // Set a unique row identifier from some captured pattern of the regex -
  // this would likely be the full path to the image. You might need to
  // perform cleanup on this value to standardize it, as the path
  // to /foo/bar/image.jpg, example.com/foo/bar/image.jpg, and
  // http://example.com/foo/bar/image.jpg should not create three unique
  // source records. Standardizing the URL is key for not just avoiding
  // creating duplicate source records, but the URL is also the ID value you
  // will use in your destination class mapping callback that looks up the
  // resulting image entity ID from the data it finds in the body field.
  $id = 'http://example.com/foo/bar/image.jpg';

  // Add to the list of matches after performing more custom logic to
  // find all of the correct chunks of data we need. Be sure to set
  // every value here that you will need when constructing your entity later.
  $this->matches[$id] = array(
    'url' => $src,
    'alt' => $alttext,
    'title' => $description,
    'credit' => $credit,
    'id' => $id,
    'filename' => $filename,
    'custom_thing' => $custom_thing,
  );
}
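For the chunk-based finder, a DOM-based approach along the lines recommended above might look roughly like this; it is a sketch only, and the credit/custom fields from the previous example are omitted for brevity.

/**
 * Sketch of an image-finding method that uses the DOM extension instead of
 * a regex, operating on one chunk of source content at a time.
 */
private function findImageMethodTwo($post) {
  $document = new DOMDocument();
  // Suppress warnings caused by imperfect legacy markup.
  @$document->loadHTML($post['content']);
  foreach ($document->getElementsByTagName('img') as $image) {
    $src = $image->getAttribute('src');
    if (empty($src)) {
      continue;
    }
    // Standardize the URL here the same way findImageMethodOne() does, so
    // the two methods cannot create duplicate source records (normalization
    // omitted from this sketch).
    $id = $src;
    $this->matches[$id] = array(
      'url' => $src,
      'alt' => $image->getAttribute('alt'),
      'title' => $image->getAttribute('title'),
      'id' => $id,
      'filename' => basename($src),
    );
  }
}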

Importing the images

Now that we have our source class complete, let's import all of the image files into image entities.

/**
 * Import images.
 */
class ExampleImageMigration extends ExampleMigration {

  /**
   * {@inheritdoc}
   */
  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Creates image entities.');

    // Set the source.
    $this->source = new ExampleMigrateSourceImage();
    ...

The rest of the ExampleImageMigration is available in a Gist, but it has been omitted here for brevity. It is just a standard migration class that maps the array keys we put into the matches property of the source class to the fields of our image entity.
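For orientation, the body of that constructor is essentially a map definition plus field mappings along the following lines; the destination class and field names here are assumptions for illustration, not the Gist's actual contents.

// Key the map on the normalized URL built by the source class.
$this->map = new MigrateSQLMap(
  $this->machineName,
  array(
    'id' => array(
      'type' => 'varchar',
      'length' => 255,
      'not null' => TRUE,
      'description' => 'Normalized source URL of the image.',
    ),
  ),
  MigrateDestinationFile::getKeySchema()
);

// A plain file destination is used here for illustration; the project in
// the article targets its own image entity type instead.
$this->destination = new MigrateDestinationFile();

// Map the keys set in the source class matches array to destination fields.
$this->addFieldMapping('value', 'url');
$this->addFieldMapping('field_image_alt', 'alt');
$this->addFieldMapping('field_image_title', 'title');
$this->addFieldMapping('field_image_credit', 'credit');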

Transforming the image tags in the body

With our image entities created and the associated migration added as a dependency, we can begin sorting out how we will convert all of the image tags to tokens. This obviously assumes you are using tokens, but hopefully this will shed light on the general approach, which can then be adapted to your specific needs.

Inside our article migration (or whatever you happen to be migrating that has the image tags in the body field) we chain the callbacks() method onto the body field mapping.

// Body.
$this->addFieldMapping('body', 'post_content')
  ->callbacks(array($this, 'replaceImageMarkup'));

Now let's explore the logic that replaces the image tags with our new entity tokens. Each of the patterns referenced below will likely correspond to one of the methods in the ExampleMigrateSourceImage class that find images based upon unique patterns.

/**
 * Converts images into image tokens.
 *
 * @param string $body
 *   The body HTML.
 *
 * @return string
 *   The body HTML with converted image tokens.
 */
protected function replaceImageMarkup($body) {
  // Convert image tags that follow a given pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_FOO, array(get_called_class(), 'fooCallbackFunction'), $body);
  // Convert image tags that follow a different pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_BAR, array(get_called_class(), 'barCallbackFunction'), $body);
  return $body;
}

In the various callback functions we need to do several things:

  1. Alter the source string following the same logic we used when we constructed our potential sources in our source class. This ensures that the value passed in the $source_id variable below matches a value in the mapping table created by the image migration.
  2. Next we call the handleSourceMigration() method with the altered source value, which will find the destination id associated with the source id.
  3. We then use the returned image entity id to construct the token and replace the image markup in the body data.
$image_entity_id = self::handleSourceMigration('ExampleImageMigration', $source_id);
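Putting the three steps together, one such callback might look roughly like the sketch below; the named capture group, the normalizeImageUrl() helper, and the token format are placeholders, not the project's actual code.

/**
 * Sketch of a callback that swaps one matched img tag for an entity token.
 */
public static function fooCallbackFunction($matches) {
  // 1. Normalize the captured URL exactly as the source class did, so it
  //    matches an ID in the image migration's map table.
  $source_id = self::normalizeImageUrl($matches['src']);

  // 2. Look up the image entity created by ExampleImageMigration.
  $image_entity_id = self::handleSourceMigration('ExampleImageMigration', $source_id);

  // 3. Replace the original markup with a token, or leave it untouched if
  //    no destination was found.
  if (!empty($image_entity_id)) {
    return '[image-entity:' . $image_entity_id . ']';
  }
  return $matches[0];
}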

Implementation Details

Astute observers will notice that we called self::handleSourceMigration(), not $this->handleSourceMigration(). This is because the handleSourceMigration() method defined in the Migrate class is not static and uses $this within the body of the method. Callback functions are called statically, so the reference to $this is lost. Additionally, we can't instantiate a new Migration class object to get around this, as the Migrate class is an abstract class. You also cannot pass the current Migrate object into the callback function, because the Migrate class does not support additional arguments for the callbacks() method.

Thus, we are stuck either storing the current Migrate object in a singleton or global variable, or duplicating the handleSourceMigration() method and making it work statically. We weren't fans of either option, but we went with the latter. Other ideas or reasons to choose the alternate route are welcome!

If you go the route we chose, these are the lines you should remove from the handleSourceMigration method in the Migrate class when you duplicate it into one of your custom classes.

- // If no destination ID was found, give each source migration a chance to
- // create a stub.
- if (!$destids) {
-   foreach ($source_migrations as $source_migration) {
-     // Is this a self reference?
-     if ($source_migration->machineName == $this->machineName) {
-       if (!array_diff($source_key, $this->currentSourceKey())) {
-         $destids = array();
-         $this->needsUpdate = MigrateMap::STATUS_NEEDS_UPDATE;
-         break;
-       }
-     }
-     // Break out of the loop if a stub was successfully created.
-     if ($destids = $source_migration->createStubWrapper($source_key, $migration)) {
-       break;
-     }
-   }
- }

Before we continue, let's do a quick recap of the steps along the way.

  1. We made an iterative source of all images from a source data string by creating the ExampleMigrateSourceImage class that extends the MigrateSource class.
  2. We then used ExampleMigrateSourceImage as the migration source class in the ExampleImageMigration class to import all of the images as new structured entities.
  3. Finally, we built our "actual" content migration and used the callbacks() method on the body field mapping in conjunction with the handleSourceMigration() method to convert the existing image markup to entity based tokens.
The end result

With all of this in place, you simply sit back and watch your migrations run! Of course, before that you get the joy of running them countless times and facing edge cases with malformed image paths, broken markup, new image sources you were never told about, etc. Then at the end of the day you are left with shiny new image entities full of metadata that can be searched, sorted, filtered, and re-used! Thanks to token rendering (if you go that route), you also gain full control over how your img tags are rendered, which greatly simplifies the implementation of lazy-loading or responsive images. Most importantly, you have applied structure to your data, and you are now ready to transform and adapt your content for any challenge that is thrown your way!

Categories: Drupal

Jeff Geerling's Blog: Increase the Guzzle HTTP Client request timeout in Drupal 8

Drupal Planet - Thu, 2016-08-18 09:56

During some migration operations on a Drupal 8 site, I needed to make an HTTP request that took > 30 seconds to return all the data... and when I ran the migration, I'd end up with exceptions like:

Migration failed with source plugin exception: Error message: cURL error 28: Operation timed out after 29992 milliseconds with 2031262 out of 2262702 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html).

The solution, it turns out, is pretty simple! Drupal's \Drupal\Core\Http\ClientFactory is the default way that plugins like Migrate's HTTP fetching plugin get a Guzzle client to make HTTP requests (though you could swap things out if you want via services.yml), and in the code for that factory, there's a line after the defaults (where the 'timeout' => 30 is defined) like:
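That factory merges the http_client_config setting from settings.php over its defaults, so one way to raise the timeout is a one-line override there; the sketch below uses 60 seconds purely as an example value.

// In settings.php (or settings.local.php): any Guzzle option set here is
// merged over ClientFactory's defaults for every client it builds.
$settings['http_client_config']['timeout'] = 60;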

Categories: Drupal

Drupal core announcements: We can add big new things to Drupal 8, but how do we decide what to add?

Drupal Planet - Thu, 2016-08-18 05:48

Drupal 8 introduced the use of Semantic Versioning, which in practice means three levels of version numbers; the current release is Drupal 8.1.8. The last number is incremented for bugfixes, while the middle number is incremented when we add new features in a backwards-compatible way. That allows us to add big new things to Drupal 8 while it stays compatible with all your themes and modules. We have already successfully added new modules like BigPipe, Place Block, etc.

But how do we decide what gets into core? Should people come up with ideas, build them, and only once they are done find out whether they are added to core or not? No. Looking for feedback only at the end is a huge waste of time, because maybe the idea is not a good fit for core, or it clashes with another improvement in the works. So how would one go about getting feedback earlier?

We held two well-attended core conversations at the last DrupalCon in New Orleans, titled The potential in Drupal 8.x and how to realize it and Approaches for UX changes big and small, both of which discussed a more agile approach to avoid wasting time.

The proposal is to separate the ideation and prototyping process from implementation. Within implementation, the use of experimental modules helps make the process of perfecting a module more granular. We are already actively using that approach. The ideation process, on the other hand, is still to be better defined, and that is where we need your feedback now.

See https://www.drupal.org/node/2785735 for the issue to discuss this. Looking forward to your feedback there.

Categories: Drupal

Mediacurrent: How Drupal won an SEO game without really trying

Drupal Planet - Thu, 2016-08-18 05:23

At Mediacurrent we architected and built a Drupal site for a department of a prominent U.S. university several years ago. As part of maintaining and supporting the site over the years, we have observed how well it has performed in search engine rankings, often out-performing other sites across campus built on other platforms.

Categories: Drupal

KnackForge: Drupal Commerce - PayPal payment was successful but order not completed

Drupal Planet - Thu, 2016-08-18 03:00
Drupal Commerce - PayPal payment was successful but order not completed

Most of us use PayPal as a payment gateway for our eCommerce sites. Zero upfront cost, no maintenance fee, and readily available APIs and documentation make it easy for anyone to get started. At times, though, online references offer outdated documentation or advice that doesn't apply to us due to account type (Business / Individual), the country of the account holder, etc. We had a tough time when we wanted to set up Auto Return to our Drupal website.

Thu, 08/18/2016 - 15:30
Tag(s): Drupal planet, Drupal 7, DropThemes.in, drupal-commerce
Categories: Drupal

Unimity Solutions Drupal Blog: Video Annotations: A Powerful and Innovative Tool for Education

Drupal Planet - Wed, 2016-08-17 23:51

According to John J. Medina, a famous molecular biologist, "Vision trumps all other senses." The human mind is more likely to remember dynamic pictures than spoken words or long blocks of text. Advances in multimedia have enabled teachers to bring visual representations of content into the classroom.

Categories: Drupal

Drupalize.Me: Learn by Mentoring at DrupalCon

Drupal Planet - Wed, 2016-08-17 23:37

DrupalCon is a great opportunity to learn all kinds of new skills and grow professionally. For the 3 days of the main conference in Dublin (September 27–29) there will be sessions on just about everything related to Drupal that you could want. One amazing opportunity that you may not be aware of, though, is the Mentored Sprint on Friday, September 30th. This is a great place for new folks to learn the ropes of our community and how to contribute back. What may be less talked about is the chance to be a mentor.

Categories: Drupal

Roy Scholten: Vetting Drupal product ideas

Drupal Planet - Wed, 2016-08-17 14:57

We've made big strides since DrupalCon New Orleans in how we add new features to Drupal core. The concept of experimental modules has already helped us introduce features like a new way to add blocks to a page, content moderation and workflow tools, and a whole new approach for editing all the things on a page while staying on that page.

In New Orleans we started to define the process for making these kinds of big changes. Probably the most important and defining aspect of it is that we’re (finally!) enabling a clearer separation between vetting ideas first, implementation second.

True to form we specified and detailed the latter part first :-)

So, on to that first part: vetting Drupal product ideas. In my core conversation I outlined the need for making bigger UX changes faster, and suggested possible approaches for how to design and develop those, borrowing heavily from the Lean UX method.

Since then, we’ve been reminded that we really do need a clearly defined space to discuss the strategic value of proposed new features. A place to decide if a given idea is desirable and viable as an addition to core.

The point being: the core product manager, with help from Drupal UX team members, wrote up a proposal for how to propose core product ideas and what's needed to turn a good idea into an actionable plan.

It needs your feedback. Please read and share your thoughts.

Tags: drupal ux, process, drupal planet
Sub title: Agree on why and what before figuring out the how
Categories: Drupal

Mediacurrent: DrupalCon NOLA: The People Behind the Usernames

Drupal Planet - Wed, 2016-08-17 11:33

As we work every day on our own projects, with our own deadlines and priorities, it is often too easy to forget about the entire community of others using Drupal in much the same way. When we're working with Drupal in our various capacities, there is no shortage of methods to interact with the community and contribute back, but those aren't the focus of this post.

Categories: Drupal

myDropWizard.com: Drupal 6 security updates for Panels!

Drupal Planet - Wed, 2016-08-17 11:16

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a critical security release for the Panels module covering multiple access bypass vulnerabilities.

The first vulnerability allows anonymous users to use AJAX callbacks to pull content and configuration from Panels, which allows them to access private data. The second allows authenticated users with permission to use the Panels IPE to modify the Panels display of pages that they don't otherwise have permission to edit.

See the security advisory for Drupal 7 for more information.

Here you can download the patch for 6.x-3.x!

If you have a Drupal 6 site using the Panels module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Categories: Drupal