# Should portsnap be taking this long?



## dvl@ (Sep 4, 2013)

I started a `portsnap` run at 1:50 am. It's now 2:12 am and it's still running.  Is this within the normal range?  I think not... and I think I know why.


```
$ ps auwx | grep snap
root    38261   0.0  0.0 44424  2464  0  I+    1:50AM     0:00.01 sudo portsnap fetch update
root    38262   0.0  0.0 14504  2148  0  I+    1:50AM     0:00.02 /bin/sh /usr/sbin/portsnap fetch update
root    47534   0.0  0.0  9912  1544  0  I+    1:54AM     0:00.00 xargs /usr/libexec/phttpget your-org.portsnap.freebsd.org
root    47535   0.0  0.0 14504  2148  0  I+    1:54AM     0:00.08 /bin/sh /usr/sbin/portsnap fetch update
root    47536   0.0  0.0 10044  1712  0  I+    1:54AM     0:00.12 /usr/libexec/phttpget your-org.portsnap.freebsd.org bp/6553a02f599faa9af19b96195053f6f92778e45c0cc35e22d762d6ee7b
dan     48236   0.0  0.0 16280  1676  1  RL+   2:13AM     0:00.00 grep snap

$ sudo portsnap fetch update
Password:
Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found.
Fetching snapshot tag from your-org.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Updating from Sat Aug 31 20:04:21 UTC 2013 to Wed Sep  4 01:39:59 UTC 2013.
Fetching 3 metadata patches.. done.
Applying metadata patches... done.
Fetching 0 metadata files... done.
Fetching 1513 patches.....10....20....30....40....50....60....70....80....90....100....110....120....130....140....150....160....170....180....190....200....210....220....230....240....250....260....270....280....290....300....310....320....330....340....350....360....370....380....390....400....410....420....430....440....450....460....470....480....490....500....510....520....530....540....550....560....570....580....590....600....610....620....630....640....650....660....670....680....690....700...
.710....720....730....740....750....760....770....780....790....800....810....820....830....840....850....860....870....880....890....900....910....920....930....940....950....960....970....980....990....1000....1010....1020....1030....1040....1050....1060....1070....1080....1090....1100....1110....1120....1130....1140....1150....1160....1170....1180....1190....1200....1210....1220....1230....1240....1250....1260....1270....1280....1290....1300....1310....1320....1330....1340....1350....1360....1370....1380....1390....1400....1410....1420....1430....1440....1450....1460....1470....1480....1490....1500....1510. done.
Applying patches...
```

FYI, right after the first paste, the 'Applying patches' message appeared.  Overall, I think this took about 25 minutes on a FreeBSD 9.1 box with an AMD Phenom(tm) II X4 945 Processor (3010.21-MHz K8-class CPU) and 8 GB of RAM.  Load average was near 0 at the time.

Could fetching via IPv6 (which will probably fail on this network) instead of IPv4 be the cause?  I ask because:


```
[dan@bast:~/tmp] $ time fetch http://www.freebsd.org
fetch: http://www.freebsd.org: size of remote file is not known
www.freebsd.org                                         27 kB  349 kBps

real	1m15.232s
user	0m0.001s
sys	0m0.004s
[dan@bast:~/tmp] $ time fetch -4 http://www.freebsd.org
fetch: http://www.freebsd.org: size of remote file is not known
www.freebsd.org                                         27 kB  347 kBps

real	0m0.237s
user	0m0.000s
sys	0m0.005s
```

I think I either need to disable IPv6 on this server or find a way for `portsnap` to use IPv4 only.


----------



## Whattteva (Sep 4, 2013)

I can't recall even `portsnap fetch extract` taking that long. I think there is definitely something else wrong. Your system has more than enough muscle to blaze through a simple `fetch update`.


----------



## marwis (Sep 4, 2013)

Is your hard drive in good shape?


----------



## J65nko (Sep 4, 2013)

You could have checked with `netstat -an` whether you are using IPv4 or IPv6:


```
# netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 192.168.222.240.46382  46.137.83.240.80       ESTABLISHED
tcp4       0     52 192.168.222.240.22     192.168.222.20.39440   ESTABLISHED
```
Here it is IPv4 (note the `tcp4` and the foreign address 46.137.83.240.80).


```
% dig +short -x 46.137.83.240
ec2-46-137-83-240.eu-west-1.compute.amazonaws.com.

% dig +short portsnap.freebsd.org
46.137.83.240
```

So maybe you just had bad luck connecting to a very busy Amazon cloud data center.
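A quick way to summarize that output is to count connections by protocol and state; a pile of `tcp6` lines stuck in `SYN_SENT` is the giveaway. A minimal sketch, fed a canned two-line sample modeled on the output above so it is easy to follow (on a live box you would pipe `netstat -an` straight into the `awk`):

```shell
# Count connections by protocol and state. The here-doc below is a canned
# sample in FreeBSD 9.x netstat format; on a live system replace it with:
#   netstat -an | awk '...'
awk '$1 ~ /^tcp/ { count[$1 " " $6]++ }
     END { for (k in count) print k, count[k] }' <<'EOF'
tcp4       0      0 192.168.222.240.46382  46.137.83.240.80       ESTABLISHED
tcp6       0      0 2001:470:1f06:b8.62890 2001:4978:1:420:.80    SYN_SENT
EOF
```

This prints one line per protocol/state pair with a count, e.g. `tcp6 SYN_SENT 1`, which makes a stalled IPv6 connect stand out immediately.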


----------



## SirDice (Sep 4, 2013)

If you look at the download speed both are around 350 KBps. But the time taken is a lot more for IPv6. That leads me to believe it's a resolving issue rather than a connection issue.


----------



## dvl@ (Sep 4, 2013)

SirDice said:

> If you look at the download speed both are around 350 KBps. But the time taken is a lot more for IPv6. That leads me to believe it's a resolving issue rather than a connection issue.



FYI, both downloads would be accomplished via IPv4.  The IPv6 gateway is not functional at present.


----------



## dvl@ (Sep 4, 2013)

marwis said:

> Is your harddrive in good shape?



They should be:


```
$ zpool status
  pool: system
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Sun Sep  1 03:03:59 2013
config:

        NAME           STATE     READ WRITE CKSUM
        system         ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0

errors: No known data errors
```

In addition, smartd(8) isn't reporting any issues.


----------



## dvl@ (Sep 4, 2013)

J65nko said:

> You could have checked with `netstat -an` whether you are using IPv4 or IPv6



This is telling:


```
$ netstat -na
Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp6       0      0 2001:470:1f06:b8.62890 2001:4978:1:420:.80    SYN_SENT
```

And... we're waiting...


```
# time portsnap fetch update
Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found.
Fetching snapshot tag from your-org.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Updating from Wed Sep  4 02:05:46 UTC 2013 to Wed Sep  4 11:16:09 UTC 2013.
Fetching 3 metadata patches.^C

real    3m0.128s
user    0m0.032s
sys     0m0.063s
```

Yes, we have no IPv6 connection...


----------



## ShelLuser (Sep 4, 2013)

I noticed that the server which is primarily used on your end is your-org.portsnap.freebsd.org. Getting a bit curious about all this, I just tried a portsnap session myself, only this time pinning it to that particular update server: `# portsnap -s your-org.portsnap.freebsd.org fetch update`.

For the record: portsnap normally picks ec2-eu-west-1.portsnap.freebsd.org on my end.

But the difference in speed when all the numbers started to appear was noticeable to me.

My advice would be to try that one out yourself. Check /var/db/portsnap/serverlist and simply pick another server besides your-org.portsnap.freebsd.org and try using that. I think that can make a difference.

Hope this can help.


----------



## dvl@ (Sep 4, 2013)

This worked much faster (note: `-s` used, not `-t`):


```
# portsnap -s ec2-sa-east-1.portsnap.freebsd.org fetch update
Looking up ec2-sa-east-1.portsnap.freebsd.org mirrors... none found.
Fetching snapshot tag from ec2-sa-east-1.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Updating from Wed Sep  4 02:05:46 UTC 2013 to Wed Sep  4 11:59:08 UTC 2013.
Fetching 3 metadata patches.. done.
Applying metadata patches... done.
Fetching 0 metadata files... done.
Fetching 142 patches.....10....20....30....40....50....60....70....80....90....100....110....120....130....140. done.
Applying patches... done.
Fetching 1 new ports or files... done.
Removing old files and directories... done.
Extracting new files:
/usr/ports/MOVED
/usr/ports/audio/gnaural/
/usr/ports/audio/gtkpod/
/usr/ports/audio/libadplug/
...
```

It completed in well under 30 s.

FYI, if I add that hostname to /etc/portsnap.conf, then `portsnap fetch update` completes in expected time.
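For reference, pinning the mirror permanently is a one-line change to `/etc/portsnap.conf`; the hostname below is just the example mirror from above, so substitute whichever server from /var/db/portsnap/serverlist works best for you:

```shell
# /etc/portsnap.conf -- override the default mirror lookup with a fixed server
SERVERNAME=ec2-sa-east-1.portsnap.freebsd.org
```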


----------



## SirDice (Sep 4, 2013)

dvl@ said:

> FYI, both downloads would be accomplished via IPv4.  The IPv6 gateway is not functional at present.


No, but it might resolve to its IPv6 address, try to connect, fail, and then try the IPv4 address. That could account for the time difference. I get the same kind of delay when my IPv6 tunnel is broken.

I do believe there's a sysctl(8) that controls whether an IPv6 address should be tried first. Setting it to try IPv4 first might be a solution.


----------



## dvl@ (Sep 4, 2013)

As a further test, I fixed my IPv6 tunnel and reverted my above-mentioned change to /etc/portsnap.conf.  Now the command runs quickly:


```
# portsnap fetch update
Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found.
Fetching snapshot tag from sourcefire.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Updating from Wed Sep  4 11:59:08 UTC 2013 to Wed Sep  4 12:42:08 UTC 2013.
Fetching 1 metadata patches. done.
Applying metadata patches... done.
Fetching 0 metadata files... done.
Fetching 19 patches.....10.... done.
Applying patches... done.
Fetching 0 new ports or files... done.
Removing old files and directories... done.
Extracting new files:
/usr/ports/mail/dkfilter/
/usr/ports/mail/fetchyahoo/
/usr/ports/mail/ftrack/
/usr/ports/mail/mavbiff/
/usr/ports/mail/minimalist/
/usr/ports/mail/p5-Email-Reply/
/usr/ports/mail/p5-MIME-EncWords/
/usr/ports/mail/p5-Mail-MailStats/
/usr/ports/mail/p5-Mail-MboxParser/
/usr/ports/mail/p5-Mail-POP3Client/
/usr/ports/mail/p5-Mail-SRS/
/usr/ports/mail/p5-Mail-Tools/
/usr/ports/mail/p5-Net-POP3-SSLWrapper/
/usr/ports/mail/p5-Net-SenderBase/
/usr/ports/mail/postfix-logwatch/
/usr/ports/mail/postpals/
/usr/ports/mail/rlytest/
/usr/ports/mail/squirrelmail-secure_login-plugin/
/usr/ports/mail/squirrelmail-vlogin-plugin/
Building new INDEX files... done.
```

Conclusion: the IPv6 tunnel was configured and appeared up, but it was not actually passing traffic, so each fetch done by `portsnap` (and it performs many fetches) had to time out individually.  That is what took so long.
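The back-of-the-envelope arithmetic roughly fits, assuming each failed IPv6 connect costs about the 75 s measured with `time fetch` earlier in the thread:

```shell
# Rough estimate: how many timed-out connection attempts would account
# for the slow run? (75 s penalty is taken from the `time fetch`
# measurement earlier in this thread; 25 min is the observed total.)
per_attempt=75               # seconds lost per failed IPv6 connect
total=$((25 * 60))           # the ~25-minute portsnap run, in seconds
echo $((total / per_attempt))
```

Twenty-odd separate connection attempts is plausible for a `fetch update` run, which grabs the snapshot tag, metadata, and batches of patches in separate fetches.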


----------



## dvl@ (Sep 4, 2013)

SirDice said:

> No, but it might resolve to its IPv6 address, try to connect, fail, and then try the IPv4 address. That could account for the time difference. I get the same kind of delay when my IPv6 tunnel is broken.



Agreed.  When you said 'resolving issue', I read 'resolving problem'.


----------



## ShelLuser (Sep 4, 2013)

SirDice said:

> I do believe there's a sysctl(8) that controls if an IPv6 address should be tried first or not. Setting this to try IPv4 first might be a solution.


I don't know about that, but I do know about ip6addrctl(8) which can control this.


----------



## dvl@ (Sep 4, 2013)

ShelLuser said:

> I don't know about that, but I do know about ip6addrctl(8) which can control this.



FYI:

```
$ ip6addrctl
Prefix                          Prec Label      Use
::1/128                           50     0        0
::/0                              40     1    21433
2002::/16                         30     2        0
::/96                             20     3        0
::ffff:0.0.0.0/96                 10     4        0
```


----------



## SirDice (Sep 4, 2013)

ShelLuser said:
			
		

> I don't know about that, but I do know about ip6addrctl(8) which can control this.


Ah, yes. That's what I was looking for.


----------

