Acrolinx Release Notes (including subsequent service releases)

Version 2019.02 (February 2019) Acrolinx Private Cloud

Release Summary

This release is for Acrolinx Private Cloud customers only. Again, the big theme for this release is automation. This is good news if you're integrating Acrolinx into any kind of automated business process.

You'll be glad to know that we've enhanced the Platform API so that you can get more fine-grained scoring information. And Acrolinx can now automatically populate your custom fields based on metadata in your documents. This metadata includes YAML-based front matter in Markdown files and any other kind of YAML file such as Swagger API specifications. We hope these new features allow you to enhance your automation projects and ensure that quality scoring is built into the most crucial areas of your content lifecycle.

Improvements

Automatically Populate Your Custom Fields with Metadata from Your Documents

Analytics is only useful if your data is clean - especially if you have custom fields. However, it's often difficult to get people to fill out custom fields properly. Now, you can configure Acrolinx to extract data from your document and automatically populate your custom fields. No human intervention necessary.

For example, suppose that you have custom fields for "product" and "department" that need to be filled out for each document that you check. You can configure Acrolinx so that whenever someone checks a document, the product and department fields are filled out automatically. This feature only works with structured documents. For example, a suitable document could be an XML document that has "product" and "department" attributes in the header section. In this case, you would write XPath expressions for your custom fields so that Acrolinx knows where to find the relevant data.
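To make the idea concrete, here's a minimal Python sketch of XPath-style metadata extraction. The document structure, attribute names, and expressions are invented for illustration and don't reflect an actual Acrolinx configuration:

```python
import xml.etree.ElementTree as ET

# Hypothetical document with metadata attributes in the header section.
doc = """\
<article>
  <header product="Turbo Encabulator" department="Engineering"/>
  <body>Document text that Acrolinx would check.</body>
</article>"""

root = ET.fromstring(doc)

# In a real configuration, each custom field would map to an XPath
# expression such as //header/@product. ElementTree's limited XPath
# support selects the element; .get() then reads the attribute.
header = root.find(".//header")
product = header.get("product")
department = header.get("department")
print(product, department)  # Turbo Encabulator Engineering
```

A full XPath engine would let you select the attribute value directly, but the principle is the same: the expression tells the platform where in the document the custom field value lives.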

We hope that this feature saves you a lot of time and headache in managing the quality of your analytics data.

Check Content in YAML (YAML Ain't Markup Language) Files

Our reach has now extended even further into the universe of technical content formats. You can now check YAML files for language quality. This should be great news for those of you who write API documentation according to the OpenAPI Specification (formerly Swagger Specification). You can now check the quality of your endpoint and parameter descriptions directly in the source YAML file. Of course, this isn't just great news for API documentation - anyone who creates structured content in YAML can now benefit from this new feature.
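For example, a minimal OpenAPI-style fragment might look like this (the endpoint and field values are invented); the summary and description values are exactly the kind of prose you can now check:

```yaml
openapi: "3.0.0"
info:
  title: Example API
  description: >
    A short overview of what the API does. Acrolinx can now
    check this text for language quality.
paths:
  /widgets:
    get:
      summary: List all widgets
      description: Returns a paginated list of widgets, sorted by creation date.
```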

Extract Information from YAML Front Matter

If you work with static site generators like Jekyll or Hugo, or other Markdown-based toolchains, you can now extract valuable metadata from the front matter section of your Markdown files - as long as it's written in YAML.

This feature works together with the new custom data extraction feature that we mentioned earlier. First, you can identify the properties in your front matter that are relevant for custom fields. Then, you configure Acrolinx to automatically populate those custom fields based on the metadata in your front matter. You no longer have to enter information once in your front matter and then again in the Acrolinx Core Platform.
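Conceptually, the extraction works like the following naive Python sketch. It handles only flat key: value pairs, real front matter should be parsed with a proper YAML library, and the field names are invented to match the earlier example:

```python
def parse_front_matter(text):
    """Extract simple key: value pairs from a YAML front-matter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the front matter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

doc = """---
product: Acrolinx
department: Docs
---
# Getting Started
Body text here.
"""
print(parse_front_matter(doc))  # {'product': 'Acrolinx', 'department': 'Docs'}
```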

Check Confidential Documents Without Storing Any Sensitive Content in the Platform

Sometimes, you need to check documents that are extremely confidential. You don't want the text being stored in some database somewhere where you have limited control over who can see it. Confidential checking solves this problem.

You can now configure an Acrolinx Content Profile so that all content is checked confidentially. This means that the issue text isn't stored in the Analytics database. The only person who has access to the full details of a check is the person who originally checked the document. With confidential checking, even your legal department can check their content without worrying about a breach of confidentiality.

Get Full Access to Fine-Grained Scoring Information in the Core Platform API

If you've tried to integrate Acrolinx into your CI (Continuous Integration) systems, you might have noticed that not all scoring information was available in the Acrolinx API. This made it harder to implement business logic that depended on certain scoring thresholds. We've remedied this so that every score breakdown that you see in the Scorecard is now accessible via the API as well. This gives you much more freedom to implement the right business logic for your CI.
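For instance, a CI gate built on the score breakdown might look like the following Python sketch. The JSON shape and field names here are illustrative assumptions, not the documented Acrolinx API schema:

```python
import json

# Hypothetical excerpt of a check result returned by the API.
payload = json.loads("""{
  "quality": {
    "score": 78,
    "scoresByGoal": [
      {"id": "spelling", "score": 95},
      {"id": "terminology", "score": 60}
    ]
  }
}""")

THRESHOLD = 70  # minimum acceptable score per goal

# Collect every goal whose score falls below the threshold.
failing = [goal["id"]
           for goal in payload["quality"]["scoresByGoal"]
           if goal["score"] < THRESHOLD]
build_passes = not failing
print(build_passes, failing)
```

With the full breakdown available, the gate can fail a build on a single weak goal (here, terminology) even when the overall score looks acceptable.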

Filter Findability Results by Time Frame

We've updated the Findability dashboard so that you can filter the results by time frame. When you open the Findability dashboard in the Analytics section, you'll see a new "Time Range" slider. This feature is useful if you made changes to your Findability configuration and only want to focus on content that was checked after you made your changes.

Report on Integration Usage by Content Group

The Integration Tracking dashboard is a great analytics tool to gauge the reach that Acrolinx has in your content technology stack and user base. However, it previously wasn't possible to report on the types of content that were being checked in different Acrolinx Integrations.

The ability to report by Content Groups is especially useful if you've anonymized information on individual users. It's sometimes the only reliable way to tell which Acrolinx Integrations certain teams are using.

That's why we've added Content Groups as a filter parameter in the Integration Tracking dashboard. Even if you can't break the results down by user, you can still generally tell who is checking what content and with what integration. For example, suppose that you want to know who's using the Acrolinx Integration for PowerPoint. You could create a Content Group for the directories where the engineering team keeps their presentations. You could do the same thing for the Product Marketing team and the Sales team. You could then report on the checking volume for the PowerPoint integration and break down by the Engineering, Product Marketing, and Sales Content Groups. This new feature gives you even more valuable insights about the Acrolinx uptake in your organization.

Discovered Keywords Are More Relevant

We've updated the algorithm that identifies Discovered Keywords for Findability so that some common words are filtered out. You'll no longer see Discovered Keywords like "file", "information", or "product" because it would rarely make sense to optimize your content for these kinds of keywords.
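Conceptually, the filtering is just a stoplist, as in this Python sketch (the word list is illustrative, not the actual list Acrolinx uses):

```python
# Illustrative stoplist of overly generic words.
COMMON_WORDS = {"file", "information", "product"}

def relevant_keywords(discovered):
    """Drop overly common words from a list of discovered keywords."""
    return [kw for kw in discovered if kw.lower() not in COMMON_WORDS]

print(relevant_keywords(["file", "Sidebar", "terminology"]))
# ['Sidebar', 'terminology']
```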

Easily Review the Original Contexts of Spelling and Terminology Issues

When you open a Content Analysis dashboard, you can see the original contexts of each issue by clicking the "Guideline Details" link. However, if you wanted to focus on just one issue type such as spelling or terminology, there was no easy way to do this. Now, we've added links to "Spelling Details" and "Terminology Details" in the "Details" section. When you click these links, you see only the contexts for spelling issues and terminology issues respectively. In the top spelling and top terminology sections, you can also click the text of an issue and get details about the original contexts for just that specific issue. This enhancement should make it much easier for you to troubleshoot non-issues that are caused by incorrectly configured guidance settings.

Contribute Terms to a Third-Party Terminology Database Directly from the Sidebar 

This is a feature that some of you are familiar with from classic integrations. We know you were hanging out for it in the Sidebar. Now it's here! When Acrolinx discovers terms, you can now contribute them directly to your third-party terminology database from the Sidebar.

Bug Fixes

It Wasn't Possible to Report on Command Line Checker (CLC) Usage in the Integration Tracking Dashboard

The Integration Tracking dashboard is supposed to show data for absolutely all integrations. However, in the previous version of the Acrolinx Platform, the CLC wasn't showing up in the results. We've fixed this bug so that you can get usage reporting on the CLC as well as all the other Acrolinx Integrations.

In Some Circumstances, You Couldn't Assign a Checking Profile to Specific Users

Usually, it's no problem to assign a Checking Profile to one or more individual users. However, a small bug crept into the last release that prevented this function from working correctly. When you filtered for users, the full names of some users appeared in the dropdown instead of their user IDs. The problem is, Checking Profiles don't work if you target users by their full name. The feature only works with user IDs. We've now fixed this issue so that you can only select user IDs in the assignment criteria for a Checking Profile.

However, if you've already assigned Checking Profiles to users by their full names, such as "Kenny Larkin", you'll need to remove those assignments first. You can then reassign the same user by their user ID, such as "k.larkin@demo-inc.com".

The Federated Authentication Sign-In Page Displayed an Error Message

If you've tried to use Federated Authentication to log into the Acrolinx Platform, you might have noticed an ugly error message along the lines of MISSING TRANSLATION "BUTTON.ADMINISTRATIVE_LOGIN". We've fixed the issue that was causing this error message and we've updated our tests so this shouldn't happen again.

On Red Hat Enterprise Linux (RHEL), the Core Platform Saved Log Files in the Wrong Directory

The Core Platform is supposed to write the log files in <INSTALL_DIR>/server/logs but on RHEL, the log files were being written to /tmp instead. We've fixed this issue so that all log files are written to the correct directory.

People Couldn't Open Links to Terms in the Term Browser If the Link Contained the "Locale" Parameter

When you share terms with other users, you can send them direct links to terms in Acrolinx Term Browser. In the link URLs, you can add the "locale" parameter so that people see the Term Browser interface in their native language. However, when people opened links with the locale parameter set, the Term Browser opened in an infinite loop and was unusable. We've fixed this behavior so that adding a "locale" parameter no longer causes any issues.
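As a sketch, building such a link in Python might look like this. The host, path, and parameter names other than "locale" are invented for illustration, not a documented Acrolinx URL scheme:

```python
from urllib.parse import urlencode

# Hypothetical Term Browser deep link with a locale parameter.
base = "https://acrolinx.example.com/terminology/termbrowser.html"
params = {"termId": "12345", "locale": "de"}
link = f"{base}?{urlencode(params)}"
print(link)
# https://acrolinx.example.com/terminology/termbrowser.html?termId=12345&locale=de
```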

Checking PDF Files Always Resulted in an Error

If you tried to check PDFs in any integration such as the Content Analyzer or the CLC, the check failed and the Core Platform logged the following error message:

Error: Could not parse input as PDF, the check will be canceled due to failure. The error reported by the parser: "Illegal base64 character -1"

This issue was caused by double-encoding and has now been fixed so that you can check PDFs again. 
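Double-encoding breaks parsing because a single decode returns base64 text rather than the original bytes. A minimal Python sketch (with invented bytes, not real PDF data):

```python
import base64

# Illustrative stand-in for a PDF payload.
pdf_bytes = b"%PDF-1.7 minimal example"

once = base64.b64encode(pdf_bytes)    # correct: encode the raw bytes once
twice = base64.b64encode(once)        # the bug: re-encode already-encoded data

# A single decode of the double-encoded payload yields base64 text,
# not the original bytes, so the downstream parser fails.
assert base64.b64decode(once) == pdf_bytes
assert base64.b64decode(twice) == once
assert base64.b64decode(twice) != pdf_bytes
```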

Checking Large PowerPoint Files Resulted in an Ugly Error Message

If you tried to check PowerPoint files in any integration such as the Content Analyzer or the CLC, the check failed and the Core Platform logged a cryptic error message that resembled the following example.

2019/01/31 17:40:54.367 | [WARN ][] Issue (API error reference '4c28ca90-794f-44e6-9817-3306f7b92394'): 
2019/01/31 17:40:54.367 | java.io.IOException: Resetting to invalid mark

This issue occurred when checking large PowerPoint files upwards of 10 MB. We've addressed this issue by raising the acceptable size limit for PowerPoint files to 256 MB and by providing a human-friendly error message when the issue still occurs.