Search Results

Search found 22224 results on 889 pages for 'point of sale'.

Page 370/889 | < Previous Page | 366 367 368 369 370 371 372 373 374 375 376 377  | Next Page >

  • CSS and HTML inconsistencies when declaring multiple classes

    - by Cesco
    I'm learning CSS "seriously" for the first time, but I find the way you deal with multiple classes in CSS and HTML quite inconsistent. For example, I learned that if I want to declare multiple CSS classes with a common style applied to them, I have to write:

        .style1, .style2, .style3 { color: red; }

    Then, if I have to declare an HTML tag that has multiple classes applied to it, I have to write:

        <div class="style1 style2 style3"></div>

    And I'm asking: why? From my personal point of view it would be more coherent if both could be declared by using a comma to separate each class, or if both could be declared using a space; after all, IMHO we're still talking about multiple classes, in both CSS and HTML. I think it would make more sense if I could write this to declare a div with multiple classes applied:

        <div class="style1, style2, style3"></div>

    Am I missing something important? Could you explain whether there's a valid reason behind these two different syntaxes?
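
    For what it's worth, a short example of why the two syntaxes can't simply be unified: in a selector, the comma and the space already have distinct meanings, while in HTML the class attribute is defined as a space-separated list of names, so a comma would just become part of a class name:

        /* comma = a group of independent selectors: any element carrying either class */
        .style1, .style2 { color: red; }

        /* space = the descendant combinator: a .style2 somewhere inside a .style1 */
        .style1 .style2 { font-weight: bold; }

        <!-- class is a space-separated token list; class="style1, style2" would
             create a class literally named "style1," -->
        <div class="style1 style2 style3"></div>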


  • How can I express this nginx config as apache2 rewrite rules?

    - by codecowboy
    if (!-e $request_filename){
        rewrite /iOS/(.*jpg)$ /$1 last;
        rewrite /iOS/(.*jpeg)$ /$1 last;
        rewrite /iOS/(.*png)$ /$1 last;
        rewrite /iOS/(.*css)$ /$1 last;
        rewrite /iOS/(.*js)$ /$1 last;
        rewrite /Android/(.*jpg)$ /$1 last;
        rewrite /Android/(.*jpeg)$ /$1 last;
        rewrite /Android/(.*png)$ /$1 last;
        rewrite /Android/(.*css)$ /$1 last;
        rewrite /Android/(.*js)$ /$1 last;
        rewrite ^/(.*)$ /?route=$1 last;
    }

    There are some vanity URLs, e.g. mysite.com/yourdetails, which are handled internally by a router class (it's a PHP app with index.php as the entry point), and they seem to work fine on nginx but not Apache :-/ I tried this, but the vanity URLs are not working:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?route=$1 [L]

    I'd like to rule out the Apache config first before I get too deep into the code.
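
    A hedged translation of the nginx block into .htaccess form - this assumes the static files really live under the document root without the /iOS or /Android prefix, which is what the nginx rules imply:

        RewriteEngine On
        RewriteBase /

        # Strip the device prefix from static assets that exist at the root
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(?:iOS|Android)/(.+\.(?:jpe?g|png|css|js))$ /$1 [L]

        # Everything that isn't a real file or directory goes to the router
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?route=$1 [L,QSA]

    If even trivial rules have no effect, it is worth confirming the vhost allows them at all: mod_rewrite in .htaccess requires AllowOverride FileInfo (or All) for that directory.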


  • Remotely turn on a device that is powered off?

    - by njboot
    Verbatim from the interview between Brian Williams and Edward Snowden last night:

    Snowden: Any intelligence service in the world that has significant funding and a real technological research team can own that phone [referring to Williams' iPhone] as soon as it connects to a network; it can be theirs.

    Williams: Can anyone turn it on remotely if it's off?

    Snowden: They can absolutely turn it on with the power turned off on the device.

    I concede Snowden's first point, fine. But the last? Does Snowden's last claim, that a powered-off mobile device can be remotely turned on, have any merit? It seems highly improbable, if not impossible.


  • Mac OS X Server add server user

    - by Meltemi
    What's the recommended way to add a user to Mac OS X Server that doesn't need all the hoopla associated with Workgroup Manager? There are many users pre-configured in Mac OS X Server (www, root, ldapadmin, etc.) that don't have a "Full Name" or mail accounts, etc. I'd like to create an 'svn' user to be the owner of our Subversion repository, as per this tutorial:

        If you've decided to use either Apache or stock svnserve, create a single svn
        user on your system and run the server process as that user. Be sure to make
        the repository directory wholly owned by the svn user as well. From a security
        point of view, this keeps the repository data nicely siloed and protected by
        operating system filesystem permissions, changeable by only the Subversion
        server process itself.

    I'm wondering if there's a way outside of Workgroup Manager and Open Directory, as this account will be entirely server-based. Is this still sound advice under OS X Server? If so, what's the easiest way to create the user (Mac OS X Server doesn't seem to respond to useradd)?
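
    A sketch of creating a hidden service account with dscl, which sidesteps Workgroup Manager entirely (the UID/GID value 300 is an assumption - pick anything unused below 500 so the account stays off the login window, and the repository path is a placeholder):

        sudo dscl . -create /Users/svn
        sudo dscl . -create /Users/svn UserShell /usr/bin/false
        sudo dscl . -create /Users/svn UniqueID 300
        sudo dscl . -create /Users/svn PrimaryGroupID 300
        sudo dscl . -create /Users/svn NFSHomeDirectory /var/empty
        sudo dscl . -create /Users/svn Password '*'        # disable password login
        sudo chown -R svn /path/to/repository              # hypothetical repository path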


  • Migrating Roaming Profiles from one drive to another

    - by Jared
    As the title suggests, how can I migrate roaming profiles located on one drive (which is starting to fill up) to another? The current share is like this: "\\SVR1\Shares\UserProfiles\%username%\" - but of course, this is physically located at C:\Shares\UserProfiles\%username%\ What do I need to do? Do I simply copy/paste into the bigger (RAID1) drive and then repoint all the profile paths (using the profile properties in AD Users & Computers)? What if I want to point this to a different file server altogether? Best practices? Tips? Anything you guys can suggest. Thanks!
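
    For the copy itself, robocopy preserves NTFS permissions and ownership, which a plain copy/paste will not; a hedged sketch (D: as the new RAID1 drive is an assumption):

        robocopy C:\Shares\UserProfiles D:\Shares\UserProfiles /E /COPYALL /R:1 /W:1 /XJ /LOG:C:\profile-migration.log

    Copy with the share offline (or after hours), repoint the profile paths in AD Users & Computers, and keep the old copy read-only until the new location has been verified. The same procedure works for a different file server; only the UNC path in the profile properties changes.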


  • Varnish 3.0.2 to Apache2 sometimes returns error 503

    - by Ronnie Jespersen
    Hey guys, I hope you can help me out here. I have Nginx passing HTTP and HTTPS to a Varnish cache (3.0.2), and from Varnish it is sent on to Apache2. Now, I have for some time been tracking some strange 503 errors, but I can't seem to find the silver bullet. Currently I am logging the 503 errors through Varnish this way:

        sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log

    and then referring to the Apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed:

        20 SessionOpen c 127.0.0.1 34319 :8081
        20 ReqStart c 127.0.0.1 34319 607335635
        20 RxRequest c HEAD
        20 RxURL c /health-check
        20 RxProtocol c HTTP/1.0
        20 RxHeader c X-Real-IP: 192.168.3.254
        20 RxHeader c Host: 192.168.3.189
        20 RxHeader c X-Forwarded-For: 192.168.3.254
        20 RxHeader c Connection: close
        20 RxHeader c User-Agent: Astaro Service Monitor 0.9
        20 RxHeader c Accept: */*
        20 VCL_call c recv lookup
        20 VCL_call c hash
        20 Hash c /health-check
        20 VCL_return c hash
        20 VCL_call c miss fetch
        20 Backend c 33 aurum aurum
        20 FetchError c http first read error: -1 11 (No error recorded)
        20 VCL_call c error deliver
        20 VCL_call c deliver deliver
        20 TxProtocol c HTTP/1.1
        20 TxStatus c 503
        20 TxResponse c Service Unavailable
        20 TxHeader c Server: Varnish
        20 TxHeader c Content-Type: text/html; charset=utf-8
        20 TxHeader c Retry-After: 5
        20 TxHeader c Content-Length: 879
        20 TxHeader c Accept-Ranges: bytes
        20 TxHeader c Date: Wed, 06 Jun 2012 12:35:12 GMT
        20 TxHeader c X-Varnish: 607335635
        20 TxHeader c Age: 60
        20 TxHeader c Via: 1.1 varnish
        20 TxHeader c Connection: close
        20 Length c 879
        20 ReqEnd c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689

    Now the backend server (Apache) does not have any 503 errors in its access log at this point, so I am confused. Is Varnish throwing a 503 because it thinks Apache is too slow? There is a lot of traffic coming through at this point, so I know the server is up and running. I do have other 503 error codes with POSTs and GETs, so there is really no pattern. It seems to happen at random times and on random requests, even in the morning when the server doesn't seem to be doing anything. I do see another pattern in the log:

        4 VCL_call c recv pass
        4 VCL_call c hash
        4 Hash c /?id=412
        4 VCL_return c hash
        4 VCL_call c pass pass
        4 FetchError c no backend connection
        4 VCL_call c error deliver
        4 VCL_call c deliver deliver

    Here FetchError says "no backend connection".
    A summary of the FetchErrors in today's log:

        16 FetchError c http first read error: -1 11 (No error recorded)
        5 FetchError c http first read error: -1 11 (No error recorded)
        4 FetchError c http first read error: -1 11 (No error recorded)
        19 FetchError c http first read error: -1 11 (No error recorded)
        5 FetchError c http first read error: -1 11 (No error recorded)
        23 FetchError c http first read error: -1 11 (No error recorded)
        24 FetchError c http first read error: -1 11 (No error recorded)
        16 FetchError c http first read error: -1 11 (No error recorded)
        6 FetchError c http first read error: -1 11 (No error recorded)
        4 FetchError c http first read error: -1 11 (No error recorded)
        5 FetchError c http first read error: -1 11 (No error recorded)
        4 FetchError c http first read error: -1 11 (No error recorded)
        4 FetchError c http first read error: -1 11 (No error recorded)
        22 FetchError c http first read error: -1 11 (No error recorded)
        6 FetchError c http first read error: -1 11 (No error recorded)
        21 FetchError c http first read error: -1 11 (No error recorded)
        26 FetchError c no backend connection
        4 FetchError c no backend connection
        20 FetchError c http first read error: -1 11 (No error recorded)
        39 FetchError c http first read error: -1 11 (No error recorded)

    I haven't changed the default timeout values for Varnish. This is my configuration for one of the backend servers:

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }

    I'm running the prefork module on Apache2 with this configuration:

        <IfModule mpm_prefork_module>
            StartServers 1
            MinSpareServers 2
            MaxSpareServers 5
            MaxClients 200
            MaxRequestsPerChild 75
        </IfModule>

    and only PHP files are sent to the server; every other static file is handled by Nginx. Any ideas?
    ------- EDIT --------------

    Some more debugging information: I have run varnishadm debug.health:

        Backend radon is Healthy
        Current states good: 5 threshold: 2 window: 5
        Average responsetime of good probes: 0.002560
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend xenon is Healthy
        Current states good: 5 threshold: 2 window: 5
        Average responsetime of good probes: 0.002760
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend iridium is Healthy
        Current states good: 5 threshold: 2 window: 5
        Average responsetime of good probes: 0.000849
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend aurum is Healthy
        Current states good: 5 threshold: 2 window: 5
        Average responsetime of good probes: 0.002100
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

    And I have been monitoring varnishstat from the two load balancers:

        3224774 3.99 2.61 backend_conn - Backend conn. success
        27 0.00 0.00 backend_unhealthy - Backend conn. not attempted
        63 0.00 0.00 backend_fail - Backend conn. failures
        358798 0.00 0.29 backend_reuse - Backend conn. reuses
        21035 0.00 0.02 backend_toolate - Backend conn. was closed
        379834 0.00 0.31 backend_recycle - Backend conn. recycles
        26 0.00 0.00 backend_retry - Backend conn. retry

        3217751 5.99 2.61 backend_conn - Backend conn. success
        32 0.00 0.00 backend_fail - Backend conn. failures
        364185 0.00 0.30 backend_reuse - Backend conn. reuses
        27077 0.00 0.02 backend_toolate - Backend conn. was closed
        391263 0.00 0.32 backend_recycle - Backend conn. recycles
        36 0.00 0.00 backend_retry - Backend conn. retry

    Notice that none of them have reported backend_fail. /Ronnie
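
    One observation, offered as a hedged guess rather than a diagnosis: the ReqEnd line in the failed health check shows the request hanging for 59.99 seconds before the 503, which matches Varnish's default first_byte_timeout of 60 seconds - i.e. Apache accepted the connection but took too long to send the first byte. Raising the fetch timeouts on the backend would confirm or rule that out (values here are illustrative, not recommendations):

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            .connect_timeout = 5s;          # time allowed to open the TCP connection
            .first_byte_timeout = 120s;     # default is 60s; "http first read error" fires when this expires
            .between_bytes_timeout = 60s;
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }

    The "no backend connection" errors are a different failure (no connection could be opened at all) and, given MaxRequestsPerChild 75 against MaxClients 200 in the prefork config, may simply reflect moments when workers were busy being recycled.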


  • configure: error: Could not find libavformat - part of ffmpeg after installing libav from source

    - by Patryk
    I tried to install minidlna on my machine, but it appears it's not in the repositories anymore. So I decided to compile it myself. After downloading version 1.1.3 I tried to compile, but I needed the libav headers, which I couldn't install via apt - no idea why; there have been a lot of broken packages:

        $ sudo apt-get install libavcodec-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libavcodec-dev : Depends: libavutil-dev (= 6:9.13-0ubuntu0.14.04.1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Anyway, I installed libav 9.13 from source and now I've come to this point:

        $ ./configure
        ...
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        configure: error: Could not find libavformat - part of ffmpeg

    but I have installed that! Even in the install log I can see:

        ...
        INSTALL libavdevice/libavdevice.a
        INSTALL libavfilter/libavfilter.a
        INSTALL libavformat/libavformat.a
        INSTALL libavresample/libavresample.a
        INSTALL libavcodec/libavcodec.a
        INSTALL libswscale/libswscale.a
        INSTALL libavutil/libavutil.a
        INSTALL libavdevice/avdevice.h
        INSTALL libavdevice/version.h
        INSTALL libavdevice/libavdevice.pc
        INSTALL libavfilter/avfilter.h
        INSTALL libavfilter/avfiltergraph.h
        INSTALL libavfilter/buffersink.h
        INSTALL libavfilter/buffersrc.h
        INSTALL libavfilter/version.h
        INSTALL libavfilter/libavfilter.pc
        INSTALL libavformat/avformat.h
        ....
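
    If libav was configured with its default prefix of /usr/local (an assumption - adjust to whatever --prefix was actually used), minidlna's configure script may simply not be looking there. A sketch of pointing it in the right direction:

        export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
        export CPPFLAGS="-I/usr/local/include"
        export LDFLAGS="-L/usr/local/lib"
        sudo ldconfig      # refresh the runtime linker cache for the new libraries
        ./configure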


  • In what year were computers first used to store porn? [closed]

    - by Emil H
    Of course this sounds like a joke question, but it's meant seriously. I remember being told by an old system administrator back in the early nineties about people asking about good FTPs for porn, and that as a joke they would always tell them to connect to 127.0.0.1. They would come back saying that there was a lot of porn at that address, but that, oddly enough, it seemed like they already had it all. Point being, it seems like it's been around for quite a while. Anyway: considering that a considerable portion of the internet is devoted to porn these days, it would be interesting to know if someone has any kind of idea as to when and where the phenomenon first arose. There must be some mention of this in old hacker folklore? (Changed to CW to emphasize that this isn't about rep, but about genuine curiosity. :)


  • How much Ruby should I learn before moving to Rails?

    - by Kevin
    Just a quick question... I can never get a definitive answer when googling this, either. Some people say you can learn Rails without knowing any Ruby, but at some point you'll run into a brick wall, wish you knew Ruby, and have to go back to learn it... and some say to learn the "basics" of Ruby before learning Rails, and it will make your life that much easier. My current knowledge is low: I'm not a beginner, but I'm not a pro, either. I went through the Learn Python the Hard Way online book in about a month, but I stopped once I got to the OOP side of Python (I know booleans, if/elif/else statements, for loops, while loops, and functions). I agree with learning the "basics" of Ruby before learning Rails, but what exactly are the "basics" of Ruby? Would I need to learn the whole OOP side of Ruby before I went on to Rails? Or would I just need to learn the Ruby syntax up to where I learned Python (booleans, if/elif/else statements, for loops, while loops, functions) before I went on to Rails? Thanks!


  • How do I get a Canon MG, MP and MX series USB printer working?

    - by Old-linux-fan
    The MP6150 printer driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. I've tried the usual things: power-cycling the printer, restarting Ubuntu, etc. Listed below are the results of lsusb and fstab:

        hans@kontor-linux:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 04a9:174a Canon, Inc.
        Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
        Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

        hans@kontor-linux:~$ sudo cat /etc/fstab
        [sudo] password for hans:
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0

    Here is what I have tried: tried the HK link again, but no luck so far. However, I connected the printer wirelessly to the router on another XP box. Installing the driver from ppa:michael-gruz/canon doesn't work, but the driver is installed.
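
    One hedged observation: USB printers are not "mounted" like disks - they are registered with CUPS - so fstab will never show anything for them. A quick way to see whether CUPS can reach the device at all:

        sudo lpinfo -v                      # device URIs CUPS can detect; look for a usb://Canon... entry
        lpstat -p                           # configured printers and their current state
        tail -f /var/log/cups/error_log     # watch for errors while re-adding the printer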


  • apache-memory-hacker-linux

    - by bibhudatta
    When we start the Linux system it takes only 435 MB of memory, and it is a 4 GB memory server. When we start the httpd service it takes 1000 MB, and then it steadily takes all the memory and the server crashes. Even when we stop Apache, it releases only about 200 MB of memory. What could the problem be? Can anyone tell me what these hackers are doing? I see they are sending hits to my Apache, and I think they are doing it from this system. Below is the log. Please help me out with this.

        [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log
        180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1
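
    It is worth measuring what Apache itself is consuming before assuming an intrusion - the log above mostly shows ordinary search-engine crawlers (Baidu, Google, Sogou, Soso) hammering the site, which with a heavy PHP app and default prefork limits can exhaust 4 GB on its own. A rough sketch (assuming the processes are named httpd, as on this system):

        # Sum the resident memory of all Apache workers, in MB
        ps -ylC httpd --sort=rss | awk 'NR>1 {sum+=$8} END {printf "%.0f MB\n", sum/1024}'

    Dividing the RAM you can spare for Apache by the average worker size gives a safe MaxClients value; crawlers then queue instead of pushing the box into swap.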


  • How to unsquash and mount arch linux live CD

    - by steffen
    I am following this manual to install Arch Linux from within another Linux distro, with the help of an Arch Linux live CD. Here is what I did:

        sudo mount -o loop Downloads/archlinux-2012.11.01-dual.iso arch_iso/
        unsquashfs -d squashfs-root/ arch_iso/arch/x86_64/root-image.fs.sfs

    This results in a directory squashfs-root/ containing one file: root-image.fs. I assume that this is not what I want; I want to see something that looks like a Linux root folder. If I follow these steps - "mount the file system" with mount -B /squashfs-root ${livecd_arch} and mount -t proc /proc ${livecd_arch}/proc - I get error messages like:

        mount: mount point /home/me/arch_root//proc does not exist

    What am I missing? Thanks!
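
    From the file name, root-image.fs is very likely itself a filesystem image (the Arch ISOs of that era wrapped an ext4 image inside the squashfs), so the missing step is one more loop mount rather than further unsquashing - a sketch, with paths matching the commands above:

        mkdir -p arch_root
        sudo mount -o loop squashfs-root/root-image.fs arch_root/
        ls arch_root/                             # should now show bin/ etc/ usr/ and friends
        sudo mount -t proc proc arch_root/proc    # succeeds once a real root tree is mounted there

    The "mount point does not exist" error above happens because squashfs-root/ contains only the image file, so there is no proc/ directory inside it to mount onto.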


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to the newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble; it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes - and quit it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are - the data on the volume dates back to music projects and compositions from 2003 and before - and he needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop result stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process ID of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
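
    For anyone fighting the same battle, the exclusion and re-index cycle can also be driven from the command line with mdutil (the volume name here is taken from the post above):

        sudo mdutil -i off "/Volumes/ONM Data"   # stop indexing the volume
        sudo mdutil -E "/Volumes/ONM Data"       # erase the index so the next run starts clean
        sudo mdutil -i on "/Volumes/ONM Data"    # turn indexing back on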


  • Python version priority in OSX/UNIX PATH environment variable

    - by mindthief
    Hi all, I want my system to use /usr/bin/python, but it's currently using /opt/local/bin/python, which points to /usr/bin/python2.6. I tried modifying the PATH variable in my .bashrc as PATH=~/bin:$PATH ...and then set a symbolic link in ~/bin to point to /usr/bin/python. i.e. ~/bin/python --> /usr/bin/python I figured this might prioritize this symlink over the /opt/local version if it came before the other one in the PATH variable, but when I opened a new shell I still found python pointing to /opt/local/bin. Any advice on a good way to get the system to use /usr/bin/python? Also, I usually use ipython as opposed to python directly. I'm assuming that if the system starts to use the correct version of python then ipython would also use that version? If not, how could I also get ipython to use the correct version? Thanks!
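
    Two likely culprits, offered as guesses: on OS X, Terminal opens login shells, which read ~/.bash_profile rather than ~/.bashrc, and bash caches command locations from earlier lookups. A quick check for both:

        # Put the PATH change where a login shell will actually read it
        echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bash_profile
        source ~/.bash_profile
        hash -r            # drop bash's cached location of `python`
        which -a python    # lists every python on PATH in resolution order

    If ~/bin/python now wins, ipython will follow it only if its own shebang line doesn't hard-code another interpreter; head -1 "$(which ipython)" shows which one it is pinned to.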


  • What should I quote for a project I hope to get a job at the end of?

    - by thesunneversets
    Long story short: I applied for a (CakePHP, MySQL, etc) development job in London, UK. I grew up in Britain but am currently based quite a few thousand miles away in Canada, so I wasn't really expecting success. But quite a few emails and phone interviews later it seems that they really like me. At least to a point. Because such a major relocation would be a horrible thing to go wrong, they've sensibly suggested a trial run of getting me to build a website at a distance. I have the spec for this and it's quite a substantial amount of work. My problem is that I now need to suggest both a fee and a timescale for the job, and I haven't got any significant experience of working as a contractor. Looking at the spec, which is 1500 words of many concisely stated features, some fairly trivial and some moderately involved, I can easily imagine there being 2 weeks of intensive work there. (If everything went really well it might be closer to one week, but even though I want to impress, I definitely don't want to fall into the inexperienced-contractor trap of massively underestimating the amount of time a project will run to.) As an extra complication, there is no expectation that I should give up my day job to get this trial project done, so the hours will have to be clawed from evenings and weekends. I don't want to overcommit to a quick delivery date, only to find myself swiftly burning out due to an unrealistic workload. So, any advice for me? My main question is, what is a realistic hourly figure to demand of a stable but not excessively wealthy London-based company in the current market, bearing in mind that I'd like them to hire me afterwards? But any more general recommendations based on my circumstances above would be much appreciated too. Many thanks!


  • Can defect containment metrics be readily applied at an organizational level when there is only a consistent organizational process framework?

    - by Thomas Owens
    Defect containment metrics, such as total defect containment effectiveness (TDCE) and phase containment effectiveness (PCE), can be used to give a good indicator of the quality of the process. TDCE counts the defects that are caught at some point between requirements and the release of a product into the field, indicating the overall effectiveness of the entire process at finding and removing defects. PCE provides more detail at each phase of the software development life cycle and shows how well the defect detection and removal techniques are working; see the definitions sketched below. Applying these metrics makes sense at a level where you have a well-defined process and methodology for product development, often a project. However, some organizations provide a process framework that is tailored at the project level. This process framework would include the necessary guidance for meeting certifications (ISO9001, CMMI), practices for incorporating known good techniques (agile methods, Lean, Six Sigma), and requirements for legal or regulatory reasons. However, the specific details of how to gather requirements, design the system, produce the software, conduct test, and release are left to the product development teams. Is there any effective way to apply defect containment metrics at an organizational level when only a process framework exists at the organizational level? If not, what might be some ideas for metrics that can be distilled from each project (each using a tailored process that fits into the organizational process framework) that capture defect containment metrics to discuss the ability of the process to find and remove defects? The end goal of such a metric would be to consolidate the defect containment practices of a large number of ongoing projects and report to management. The target audience would be people in roles such as the chief software engineer and the chief engineer (of all engineering disciplines) for the organization. Although project-specific data would be available, the idea is to produce something that quantifies the general effectiveness of all tailored processes across all ongoing projects. I would suspect that this data would also be presented as part of CMMI, ISO, or similar audits to demonstrate process quality.
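
    For reference, the definitions as they are commonly given in the metrics literature (stated here as an assumption of the poster's usage; the notation is mine):

        \text{TDCE} = \frac{\text{defects found before release}}{\text{defects found before release} + \text{defects found in the field}} \times 100\%

        \text{PCE}_i = \frac{\text{defects found in phase } i}{\text{defects found in phase } i + \text{defects introduced in phase } i \text{ but found later}} \times 100\%

    Both are ratios of defects caught "in time" to all defects that were present, which is why they can, in principle, be summed across projects: each project contributes its numerator and denominator counts, and the organization-level figure is the ratio of the totals.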


  • How can I force the display of image "handles" in Microsoft Word 2010?

    - by Matt
    In order to select images in Microsoft Word documents you need to get the cursor just right so that it turns into the "+" arrow icon, at which point you can click to select the image. When your cursor is not in exactly the right spot you see something like this (note that the letter "m" shown in the picture is an image, not a font): When your cursor is in an appropriate spot you see something like this: For simple images with relatively straight and simple borders, it's easy; you hover over the image and you get the "+" arrow. But for smaller, more intricate images with many sides, thin borders or perhaps transparency it's often madness as you move your cursor all over the image struggling to find the teenie little spot that Word deems is selectable. Is there some means of enabling the display of "handles" (maybe wrong term) around images before you select them, so you can see the selectable spots without hunting and pecking for them?


  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I just paid ($6 /month) for shared Windows hosting through 1&1 hosting. I was having trouble connecting to my database from home, so I sent an email to support. I received the following response: As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel. A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Management Studio from my machine? If not, is there a web host who allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts, I'm asking how to remotely connect to my MSSQL database through 1&1's services.


  • VirtualBox 3.2 Release

    - by [email protected]
    The latest version of VirtualBox is out - version 3.2. It is the first release as Oracle VirtualBox and there are a lot of new features. Many of these I see directly impacting the Oracle VDI solution in upcoming releases (just my guess, of course), and I am updating my notebook as I write this. Er... OK - Done! There are enough features that they warrant you taking a look at this two-page VirtualBox Community Bulletin.pdf. No point in me restating them. If you and your organization haven't tried VirtualBox, or you haven't looked at it in a while, you owe it to yourself to give this a run. This is small, simple, powerful software that allows you to do way more than most people would ever need when hosting a virtual machine on your local machine. I routinely do a demo on a two-year-old MacBook running OS X locally, plus a Solaris 10 VM running the Sun Ray server and a Windows XP VM, and hang a couple of Sun Rays off of it - and the performance is stellar. You can subscribe to the mailing lists and get access to the beta releases as they come out as well, if you are into 'bleeding edge'. 40,000 downloads a day is the current rate (before this new release), but it will jump for sure now. Might as well join in!


  • Designing application flow

    - by Umesh Awasthi
    I am creating a web application in Java where I need to mock the following flow. When a user triggers a certain process (adds a product to the cart), I need to pass through the following steps:

    1. Check the HTTP session to see if the user is logged in.
    2. Check the HTTP session to see if a shopping cart is there.
    3. If the user exists in the HTTP session and his/her cart does not, get the user's cart from the database, add the item to it, save it to the HTTP session, and update the cart in the DB.
    4. If the cart does not exist in the DB, create a new cart and save it in the HTTP session.

    Though I missed a lot of use cases here (I do not want the question length to increase a lot), most of the flow will be the same as described in the steps above. My flow will start at the controller, go towards the service layer, and end up in the DAO layer. Since there will be a lot of use cases where I need to check the HTTP session and, based on that, call the service layer, I was planning to add a facade layer which would be responsible for this - checking the session and interacting with the service layer; a sketch of this idea follows below. Please suggest whether this is a valid approach or whether any other, better approach could be implemented here. One more point where I am confused: how should I handle the HTTP session in the facade layer? Do I need to pass the HTTP session object each time I call my facade, or can another approach be used here?
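
    A minimal sketch of such a facade (all type and method names here are hypothetical, not taken from the poster's code). Passing the HttpSession in per call keeps the facade itself stateless, which also answers the second question - inject the services once, pass the session each time:

        import javax.servlet.http.HttpSession;

        // Hypothetical domain and service types, stubbed so the sketch is self-contained.
        class User { }
        class Cart { }
        interface CartService {
            Cart findOrCreateCartFor(User user);      // loads from the DB, creating a cart there if absent
            void addItem(Cart cart, long productId);  // adds the item and updates the cart in the DB
        }

        public class CartFacade {
            private final CartService cartService;

            public CartFacade(CartService cartService) {
                this.cartService = cartService;
            }

            public Cart addToCart(HttpSession session, long productId) {
                User user = (User) session.getAttribute("user");  // step 1: is the user logged in?
                Cart cart = (Cart) session.getAttribute("cart");  // step 2: cart already in session?
                if (cart == null) {
                    cart = cartService.findOrCreateCartFor(user); // steps 3/4: fall back to the DB
                    session.setAttribute("cart", cart);
                }
                cartService.addItem(cart, productId);
                return cart;
            }
        }

    Because the facade never stores the session in a field, the same instance can safely serve concurrent requests.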


  • I spoke at SQL Saturday #77 and all I got was this really awesome speaker's shirt!

    - by Most Valuable Yak (Rob Volk)
    Yeah, it was 2 weeks ago, but I'm finally blogging about something! I presented Revenge: The SQL! at SQL Saturday #77 in Pensacola on June 4. The session abstract is here, and you can download the slides from that page too. You can see how I look in the speaker's shirt here. Overall it went pretty well. I discovered a new bit of evil just that morning, and in a carefully considered, agonizing decision-making process that was fully documented, tested, and approved…nah, I just went ahead and added it at the last minute. Which worked out even better than (not) planned, since it screwed me up a bit and made my point perfectly. I had a few fans in the audience, and one of them recorded it for blackmail material posterity. I'd like to thank Karla Landrum (blog | twitter) and all the volunteers for putting together such a great event, and for being kind enough to let me present. (Note to Karla: I'll get the next $100 to you as soon as I can. Might need a few extra days on the next $100.) Thanks to Audrey (blog | twitter), Peg, and Dorothy for attending and keeping the heckling down. Thanks also to Aaron (blog | twitter) for providing room and board and also not heckling. Thanks to Julie (blog | twitter) for coming up with the title for the presentation. (boo to Julie for getting sick and bailing out on us) And thanks to all of them for listening to a preview and offering their suggestions and advice! Cross your fingers that I get accepted at SQL Saturday 81 in Birmingham, SQL Saturday 85 in Orlando, or SQL Saturday 89 in Atlanta, or just attend them anyway!


  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache2 and mod_fcgid. I let it spawn up to five processes. But I soon get messages like these in the Apache logs:

        [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
        [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    and then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

        [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
        [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    at which point it refuses to respond to requests from the outside world. This doesn't seem to happen with my other FastCGIs, which all use the same Apache config:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            IPCConnectTimeout 20
            MaxProcessCount 5
            DefaultMaxClassProcessCount 2
            DefaultMinClassProcessCount 1
        </IfModule>

    Any idea what causes it?
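
    For what it's worth, the ap_pass_brigade warning is usually logged when the HTTP client disconnects before the response is complete, so the real problem may be the process accounting that follows. A hedged, config-side mitigation, using the same old-style directive names as the snippet above (values are illustrative): give mod_fcgid explicit idle and lifetime limits so dead or wedged backends get reaped and the class count can fall back below the maximum:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            IPCConnectTimeout 20
            IPCCommTimeout 60              # allow slow responses before the request is aborted
            IdleTimeout 120                # reap idle backends so new ones can be spawned
            ProcessLifeTime 3600           # recycle long-lived backends periodically
            MaxProcessCount 5
            DefaultMaxClassProcessCount 2
            DefaultMinClassProcessCount 1
        </IfModule>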


  • No Editor in Directory Utility

    - by Christian Macht
    I'm on OSX 10.6.8 and would like to enable the root user. I found Apple's KB article; however, this step:

        Click the lock in the Directory Utility window.
        Enter an administrator account name and password, then click OK.
        Choose Enable Root User from the Edit menu.

    is not possible, because there is no Edit menu! I'm logged in as admin, but I only see "Services" and "Search Policy". I repeat: I do not see the "Editor", it's just not there. What gives? How it should be: [screenshot: Directory Utility]


  • Extend my LAN network

    - by user268291
    I have a patch panel and hard wiring already set up in my office. The patch panel is 24-port and all the ports are engaged. All my switch ports are engaged as well. Now, I have a printer connected to a wall point (RJ45) which I want to shift to a new room where there is no LAN setup. I want to have two RJ45 wall plugs in the new room, so that I can use one RJ45 port for my LAN connection in the new room and the other RJ45 wall port for my printer. There is no option other than wired LAN (no wireless). Please help me and tell me how I can get the two RJ45 wall-plate plugs in the new room working while keeping my LAN running. It is a little urgent for me. Please help.


  • Extract cert and private key from JKS keystore to use it in Apache2 httpd

    - by momo
    I tried to find this but no luck. I created a JKS keystore and generated a CSR, then imported the signed cert and the intermediate and root CA certs. I used this keystore on Tomcat without problems. Now I want to use the same cert for the Apache2 HTTP server on the same machine. I actually want to set up mod_jk to redirect /*.jsp and servlet paths to Tomcat and serve the static content and PHP from Apache2. I tried to convert the JKS to PKCS12 with keytool, to afterwards handle it with openssl, with a command like this:

        keytool -importkeystore -srckeystore foo.jks \
                -destkeystore foo.p12 \
                -srcstoretype jks \
                -deststoretype pkcs12

    The problem is that only the cert is exported, but not the rest of the chain. I actually used this keystore on Apache and it complained that the key and cert don't match (not sure if that's related to the chain or not). Can anyone point me in the right direction? I am not a server guy and I am kinda lost with all these things :-(
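
    Assuming the PKCS12 conversion itself succeeded, the pieces can be pulled out with openssl and wired into Apache like this (file names are placeholders):

        # Private key - written unencrypted because of -nodes, so protect the file
        openssl pkcs12 -in foo.p12 -nocerts -nodes -out server.key
        # The server (leaf) certificate only
        openssl pkcs12 -in foo.p12 -clcerts -nokeys -out server.crt
        # Any CA certificates that made it into the bundle
        openssl pkcs12 -in foo.p12 -cacerts -nokeys -out chain.crt

        # Then in the Apache SSL vhost:
        #   SSLCertificateFile      /etc/ssl/server.crt
        #   SSLCertificateKeyFile   /etc/ssl/server.key
        #   SSLCertificateChainFile /etc/ssl/chain.crt

    If chain.crt comes out empty, the intermediates never made it into the keystore's key entry, and the "key and cert don't match" complaint is worth re-checking directly: the outputs of openssl x509 -noout -modulus -in server.crt and openssl rsa -noout -modulus -in server.key must agree.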

