
Saturday, July 16, 2016

Fixing "DB_RUNRECOVERY: Fatal error, run database recovery" when attempting to run yum update

Comcast is my current home ISP. Over the last year, I've had a ton of problems with them filtering all sorts of legitimate (outbound) traffic. The latest fun has been the random dropping of SSH connections on both the standard port (22) and non-standard TCP ports. This happened while I was running a `yum update` on one of my servers, and I hadn't used `nohup` or `disown` to allow the processes I had spawned to continue running.

By the time I got a VPN connection up and running, the yum process had been killed, which in turn corrupted the underlying RPM database. How can you tell that your server's database is corrupt? Running yum will generate this vaguely terrifying error:

# yum update
error: rpmdb: BDB0113 Thread/process 4498/140039588845376 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed

Here is what I did to resolve this issue:

1) I created a backup of the yum database files referenced in the `cannot open Packages database in /var/lib/rpm` line of the error:

    # mv /var/lib/rpm/__db* /tmp/

You can never go wrong with backing up your data before troubleshooting. I do it reflexively.

2) I then used the following command to rebuild the database indices from the installed package headers:

    # rpm --rebuilddb

3) I cleaned out all of yum's caches and rpm header files:

    # yum clean all
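
The three steps above can be collected into a single sketch. This is just the sequence I ran, wrapped in a shell function (it assumes root and the same paths shown in the error output):

```shell
# Recovery sketch for a corrupt rpmdb. Run as root.
recover_rpmdb() {
    mkdir -p /tmp/rpmdb-backup &&
    mv /var/lib/rpm/__db* /tmp/rpmdb-backup/ &&  # 1) back up the BDB environment files
    rpm --rebuilddb &&                           # 2) rebuild indices from installed package headers
    yum clean all                                # 3) flush yum's caches and rpm headers
}
```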

At this point, the few solutions to this issue I found online before writing this article claim the problem should be resolved. Under some circumstances, that would probably be true. But because my connection was severed by the Internet gurus at my ISP, I also had unfinished transactions pending in yum.

In my situation, running `yum update` again at this point produced this notification after yum began processing dependencies:

There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).

So I needed to take one more step.

4) Finally, I ran `yum-complete-transaction` with the `--cleanup-only` flag. I wasn't entirely confident about how the database rebuild had impacted the transaction list, so I wanted to discard it rather than try to finish it.

    # yum-complete-transaction --cleanup-only

Fortunately, this fixed things for me and I didn't have to worry about "removing/installing packages by hand". However, I'd like to add a quick note about that, and about the "maybe package-cleanup can help" suggestion in the same warning message.

The very first time I saw that package-cleanup suggestion I wasn't sure what it was talking about. I thought it might have been a typo meant to refer to the similar `yum clean packages` command. After all, package-cleanup is:
  1. not a native yum command,
  2. not part of the core yum package, and
  3. not a separate package.
So what is it? package-cleanup is part of the yum-utils package. My failure to recognize package-cleanup might make me look like a dummy, but it's doubtful I'm the only one who got a little confused. Check out the incredibly helpful wiki page for the tool on the yum website (and be sure to misspell it "PackageClenup", with two A's instead of three, in the URI to get to the page):

Awesome.
I'm just giving the yum guys a hard time, though; I've never read a wiki entry for `grep` and I use it every day. I only take issue with using yum-utils to resolve this specific problem: because I hadn't installed yum-utils before this problem occurred, I wasn't crazy about installing it in the middle of trying to resolve a problem with the yum core package. I wouldn't want to install something like this from source, and installing the package using yum or rpm could very well fail.

That's all for today. Have a good weekend, folks.

Friday, October 2, 2015

Fedora Project's RHEL yum repo has been throwing errors since yesterday (UPDATED)

A few of my Red Hat servers run cron jobs to check for updates. Starting yesterday (Thursday, October 1st, 2015) at around 3 PM, I began encountering 503 Service Unavailable errors when attempting to contact the Fedora Project URL that hosts the metalink for the EPEL repository, which these EC2 servers use alongside the core rhui-REGION-rhel-server-releases RHEL repository.

Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64 error was
14: HTTPS Error 503 - Service Unavailable

Three hours or so later, the URL began responding again, but the problems remained. `yum` now reports corrupted update notices from the rhui-REGION-rhel-server-releases repository (a core RHEL repository for EC2):

Update notice RHSA-2014:0679 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
You should report this problem to the owner of the rhui-REGION-rhel-server-releases repository.
Update notice RHSA-2014:1327 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHEA-2015:0372 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:0335 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHEA-2015:0371 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:0416 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:0303 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:0556 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:0290 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:0596 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:0578 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:0716 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:1115 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHBA-2015:1533 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:1586 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.
Update notice RHSA-2015:1705 (from rhui-REGION-rhel-server-releases) is broken, or a bad duplicate, skipping.

I sent a tweet to Fedora to hopefully get some feedback. Because this wasn't a super critical issue, I've been slacking on the troubleshooting as well; I will update here and/or provide a new post with more info.

UPDATE: I am increasingly convinced that this is an error with the repository and not something with my server. Check out the following command output:

Nothing marked as out of sync:
# yum distro-sync
Loaded plugins: amazon-id, rhui-lb
No packages marked for distribution synchronization

No problems listed by `package-cleanup`:
# package-cleanup --problems
Loaded plugins: amazon-id, rhui-lb
No Problems Found

`yum check` finds nothing:
# yum check
Not loading "rhnplugin" plugin, as it is disabled
Loading "amazon-id" plugin
Not loading "product-id" plugin, as it is disabled
Loading "rhui-lb" plugin
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.012
Yum version: 3.4.3
rpmdb time: 0.000
check all

The cache has been cleaned (repeatedly):
# yum clean all
Not loading "rhnplugin" plugin, as it is disabled
Loading "amazon-id" plugin
Not loading "product-id" plugin, as it is disabled
Loading "rhui-lb" plugin
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.021
Yum version: 3.4.3
Cleaning repos: epel rhui-REGION-client-config-server-7 rhui-REGION-rhel-server-optional rhui-REGION-rhel-server-releases rhui-REGION-rhel-server-rh-common
Cleaning up everything

No orphans:
# package-cleanup --orphans
Not loading "rhnplugin" plugin, as it is disabled
Loading "amazon-id" plugin
Not loading "product-id" plugin, as it is disabled
Loading "rhui-lb" plugin
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.012
Setting up Package Sacks
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/rhui-client-config/rhel/server/7/x86_64/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/debug
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/source/SRPMS
mirrorlist: https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/os
pkgsack time: 0.327
rpmdb time: 0.000
atomic-release-1.0-19.el7.art.noarch

By default, EC2 instances automatically repopulate mirrorlist URLs configured in /etc/yum.repos.d/*.repo files using the region in which the instance is hosted, like this:

mirrorlist=https://rhui2-cds01.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/$releasever/$basearch/os

I've manually updated the relevant .repo file with my region and upped the debugging level variables for yum-cron to try to narrow things down a bit. No answers yet ...
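
For reference, the substitution itself is trivial; on a real server you would run something like `sed -i 's/REGION/us-west-2/g'` against the relevant file under /etc/yum.repos.d/ (the exact filename varies by image, so treat that path as an assumption). The transformation looks like this:

```shell
# Demonstrate the REGION placeholder substitution on a sample mirrorlist
# line; single quotes keep $releasever/$basearch literal for yum to expand.
echo 'mirrorlist=https://rhui2-cds01.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/$releasever/$basearch/os' \
  | sed 's/REGION/us-west-2/g'
```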

LATEST UPDATE (11-19): I believe I somewhat figured this out quite a while ago, but I just haven't had the time to update this post.

Amazon manages the licensing information for EC2 instances with operating systems that require it - like Windows and RHEL. So, the short answer is: Amazon broke it. I can't remember off-hand what the licensing agreement is in relation to this particular issue. I do know that I was still paying the exorbitant monthly rate for an RHEL-licensed instance. And I certainly received no notification that my RHEL license was expiring.

This was a very bad experience. The fact is, there are very few reasons why a non-enterprise-scale user would ever use RHEL rather than CentOS. For enterprise users that do require licensing, I would highly recommend looking into a Satellite-based update solution. I'm not sure at the moment what the logistics of doing such a thing on a platform like Amazon would be, but I will be doing my homework on the subject shortly.

Sunday, July 26, 2015

Errors with Nikto installation or operation within OpenVAS

When installing the vulnerability scanner Nikto/Nikto2 using yum on Red Hat Enterprise Linux 7, CentOS 7, or even Scientific Linux 7, the odds are good that you will encounter some irritating problems. Namely, the installation will fail while requiring a dependency that appears not to exist for the version of Linux you are using. Fun! So you probably think you are safe if you install OpenVAS, a prepackaged suite of security utilities that includes Nikto as a plugin. But you would be wrong! Installing OpenVAS from an RPM will succeed, and everything will look fine, until you try to use Nikto within OpenVAS, which will result in a fatal error.

Nikto is included in the Extra Packages for Enterprise Linux (EPEL) yum repository for all recent versions of Red Hat Linux; EPEL is part of the Fedora Project. While it contains third-party applications, it is not a third-party repository like RPMFusion or Atomicorp. I have only very rarely had problems with the EPEL repo, and this is the first time I have had problems with it in years.

So here is what the failure looks like:

[root@ip-172-31-20-10 notes]# yum install nikto
Loaded plugins: amazon-id, rhui-lb
Resolving Dependencies
--> Running transaction check
---> Package nikto.noarch 1:2.1.5-10.el7 will be installed
--> Processing Dependency: perl(LW2) for package: 1:nikto-2.1.5-10.el7.noarch
--> Finished Dependency Resolution
Error: Package: 1:nikto-2.1.5-10.el7.noarch (epel)
           Requires: perl(LW2)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Alternatively, if you are going the OpenVAS route, your scan report will include the following error from the Nikto plugin:

Can't locate LW2.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl 
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/bin/nikto line 63.
BEGIN failed--compilation aborted at /usr/bin/nikto line 63.

The studious reader will have noticed a common theme in the failures: a Perl module going by the mysterious initials "LW2". The initials stand for LibWhisker2 (packaged as perl-libwhisker2). LibWhisker2 is a Perl library focused on HTTP functions, and it is commonly used by vulnerability scanners. However, to make matters a bit more complicated, recent versions of Nikto require a slightly modified version of LibWhisker2, as can be seen from the Nikto installation guide (italics mine):
The only required Perl module that does not come standard is LibWhisker. Nikto comes with and is configured to use a local LW.pm file (in the plugins directory). As of Nikto version 2.1.5, the included LibWhisker differs (slightly) from the standard LibWhisker 2.5 distribution.
LibWhisker has always been somewhat of a pain in the ass for Nikto users. Eight years ago, when LibWhisker updated from version 1.x to version 2.x, Red Hat users found themselves unable to install Nikto when the repositories all dropped version 1.x from their package lists, even while the Nikto installer still required the previous version. It's obvious, then, that the LibWhisker library has been packaged in a variety of Red Hat repositories for years. As of Red Hat 7, it is no longer included. Why? Who knows.

So how about just finding a third-party repository that has addressed this issue, adding it to your server, and calling it a day? Seems reasonable enough; however, I looked at several repositories and could only find one - Atomicorp - that appears to have patched this issue. Furthermore, many administrators are wary of adding third-party repositories to servers. Vulnerability scanners collect a wealth of very sensitive information, and even excellent third-party repositories require that users place a significant amount of trust in their packages. To many admins, adding a third-party repo simply is not an option.

Fortunately, I have confirmed that, for the time being, a previous RPM included in the repositories for Fedora Core 19 resolves the issues listed in this post. I have uploaded the LibWhisker2 RPM to my rarely-used Github page should anyone else need it. Remember to install Perl first, before installing the RPM.
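
After installing the RPM, it's worth confirming that the module actually loads. A quick sanity check (the `perl_has` helper here is just illustrative; `LW2` is the module name from the errors above):

```shell
# Return success if the named Perl module loads cleanly
perl_has() { perl -M"$1" -e1 2>/dev/null; }

perl_has LW2 && echo "LW2 present" || echo "LW2 missing"
```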

NOTE: If you plan on using Nikto with Metasploit, you will require two additional Perl modules to correctly use logging: RPC::XML and RPC::XML::Client. Both of these are available through the EPEL yum repo using `yum install perl-RPC-XML.noarch`. This dependency is pretty clearly outlined in the Nikto installation documentation (and not required for a basic Nikto installation, like LibWhisker).

Saturday, October 4, 2014

Amazon EC2 Connectivity Failures - 10/4/2014

I have seen indications of periodic connectivity issues impacting Amazon's EC2 Cloud Computing architecture. Personally, I have encountered issues with connecting to Amazon's Yum repository hosts from EC2 instances.

Amazon has published outage notifications of brief connectivity and DNS failures impacting the US-EAST-1 region between October 2nd and October 4th. However, my EC2 instances are within the US-WEST-2 region, and I am experiencing issues today, October 4th, 2014 at approximately 11:30 AM EST.

For example:

# yum provides seinfo
Loaded plugins: amazon-id, rhui-lb

epel/x86_64/filelists_db                                        | 4.7 MB  00:00:01
rhui-REGION-rhel-server-optional/7Server/x86_64/filelists_db    | 3.2 MB  00:00:00

https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/repodata/e5ee2c196ee6525998525a2bf74bb40608dce199-filelists.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found

Trying other mirror.

https://rhui2-cds02.us-west-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/repodata/e5ee2c196ee6525998525a2bf74bb40608dce199-filelists.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found

Then, 5 minutes later, with absolutely no changes to my server's network or yum configuration:

# host rhui2-cds01.us-west-2.aws.ce.redhat.com
rhui2-cds01.us-west-2.aws.ce.redhat.com has address 50.112.120.15

# yum provides seinfo
Loaded plugins: amazon-id, rhui-lb
setools-console-3.3.7-46.el7.x86_64 : Policy analysis command-line tools for SELinux
Repo        : rhui-REGION-rhel-server-releases
Matched from:
Filename    : /usr/bin/seinfo

I find this extremely frustrating. With my small presence on EC2, I have no ability to troubleshoot what is causing these issues. However, I can confirm that there *are* issues as of today, that Amazon has been aware of connectivity and DNS failures for at least two days, and that Amazon is currently claiming that there are no issues.
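
For anyone seeing similar symptoms, a quick reachability check against the mirror hosts named in the errors can at least distinguish a dead mirror from a local problem. The hostnames below are the ones my instance used; yours will differ by region:

```shell
# Probe each RHUI mirror host with a HEAD request; -f makes curl fail on
# HTTP errors, -s/-I keep it quiet and header-only.
for h in rhui2-cds01.us-west-2.aws.ce.redhat.com \
         rhui2-cds02.us-west-2.aws.ce.redhat.com; do
    if curl -sfI --max-time 10 "https://$h/" >/dev/null 2>&1; then
        echo "$h: reachable"
    else
        echo "$h: unreachable"
    fi
done
```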

This is quickly becoming the industry-standard mode of behavior for Cloud computing providers: wild-eyed, outlandish promises of perfect availability followed by regular connectivity failures that are haphazardly brushed under the rug.

Customers are owed transparency. I remain convinced that the only way to accomplish reliability is by "doing it yourself" and colocating servers in multiple datacenters, implementing and managing redundancy directly. The issue is too important to trust to hosting providers who have consistently demonstrated dishonesty.

See for yourself the almost invisible notice Amazon has posted to customers on their Service Health Dashboard:

Amazon EC2 Buries Connectivity Failure Notifications
Downtime? What Downtime?

Friday, March 15, 2013

Installing nslookup, whois and host on CentOS Version 6.*

So you've just done a barebones installation of CentOS 6, and you run `whois` (or `host`, or `nslookup`) to check DNS records, only to get the following:

whois: command not found

By default the barebones CentOS installation lacks even the most basic network diagnostic tools. Use yum to install the following packages to get a few basic tools back on your server:

yum install bind-utils 

(installs nslookup and host)

yum install jwhois

(installs whois)
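
A quick way to see which of these tools are already present before installing anything (works in any POSIX shell):

```shell
# Report which basic DNS/whois tools are installed
for tool in host nslookup whois; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: installed"
    else
        echo "$tool: missing"
    fi
done
```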
