DragonFly BSD
DragonFly users List (threaded) for 2005-02

Re: upgrade from FreeBSD 4.11-RELEASE


From: Bill Hacker <wbh@xxxxxxxxxxxxx>
Date: Thu, 24 Feb 2005 20:46:47 +0800

Jake Maciejewski wrote:

> On Thu, 2005-02-24 at 17:27 +0800, Bill Hacker wrote:
>
>> Janet Sullivan wrote:
>>
>>> Are there any known issues upgrading from FreeBSD 4.11 to DragonFly, or should the instructions at http://www.dragonflybsd.org/docs/upgrade-freebsd.cgi still work just fine?
>>
>> 4.10 and later have a progressively increasing number of 5.X'ish backports included. (I run 4.9 thru 4.11-STABLE.)
>
> When I upgraded from 4.10, the only thing I noticed was that some of the changes in UPDATING had already been made.
>
>> I do not know if those steps will still work without further ado, but I can say that if you can do so, a 'clean' DragonFlyBSD install from ISO, newfs and all, is *very much* faster - and very trouble-free.


> I agree that a binary install is cleaner and faster, but if you don't want the downtime, or want a custom kernel or optimizations, you'd have to build from source eventually anyway.


Agreed w/r/t building from source, either right after install or later, but expect fewer 'distractions' if it's done on a box that was DragonFlyBSD from the outset (or since the most recent newfs, anyway <g>).
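
For anyone who hasn't done it, the source cycle on a 4.x-era tree runs roughly like this - the supfile path and MYKERNEL are stand-ins, so check the handbook for your tree:

    cvsup -g -L 2 /usr/share/examples/cvsup/standard-supfile
    cd /usr/src
    make buildworld
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL
    reboot                              # then, once back up:
    cd /usr/src && make installworld
    mergemaster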


Nothing is perfect - we've been 'bitten' a few times by ordinary FreeBSD changes.

Going 4.X to 5.X, then having - with the taste of chaos in one's mouth - to go back to 4.X, was not a lot of fun - which is why I am here <g>. DragonFlyBSD is fixing what's broke, not what ain't broke.

Nowadays we break the RAID1, remount it as two independent disks, install the new system on one drive while the other carries the traffic, copy over whatever we need to preserve, reboot into the new system, then overwrite the old drive from the new 'merged' one, recreate the RAID1, edit fstab, and reboot. Sounds complicated, but it works well 'over the wire'.
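
Roughly, with atacontrol(8) and purely illustrative device names (ar0 the array, ad4/ad6 its members), the dance goes something like:

    atacontrol status ar0     # confirm the mirror is healthy first
    atacontrol delete ar0     # break the RAID1 into plain ad4 and ad6
    # ad4 keeps carrying the traffic; newfs and install the new system
    # on ad6, then copy what has to survive, e.g. per filesystem:
    dump -0af - /usr | (cd /mnt/usr && restore -rf -)
    # boot from ad6, prove it out, clone it back over ad4, then:
    atacontrol create RAID1 ad4 ad6
    atacontrol rebuild ar0
    # point /etc/fstab back at ar0 and reboot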

The rebuild is background work and may take the better part of a full day, but the box stays in production nearly the whole time; it needs only a couple of 40-90 second reboots, plus (recommended under atacontrol RAID1, anyway) time for a forced fsck from /etc/rc, not just a preen, before going multi-user.
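
By 'forced' I mean a full pass rather than the quick preen the boot scripts normally run; done by hand (ar0s1a is just a placeholder partition) the difference is:

    fsck -p  /dev/ar0s1a    # preen: fast, fixes only 'expected' inconsistencies
    fsck -fy /dev/ar0s1a    # forced full check, answering yes throughout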

Seldom requires a trip to the data centre.

FWIW, we always do the cvsup and make cycle *twice*. Back-to-back.

Ensures we have built the latest with the latest, as it takes exactly one *bit* wrong in the wrong place to produce grief.
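
In shell terms the habit is just this - same placeholder paths as the cycle above:

    for pass in 1 2; do
        cvsup -g -L 2 /usr/share/examples/cvsup/standard-supfile
        cd /usr/src && make buildworld && make buildkernel KERNCONF=MYKERNEL
    done
    # the second pass rebuilds the freshest sources with freshly built tools;
    # install only after it completes cleanly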

YMMV,

Bill



