Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • Can one really fry a monitor by setting the wrong HorizSync and VertRefresh?

    - by rumtscho
    I've encountered this problem on several different systems with several different monitors: a monitor functions perfectly under Windows, but after I install Linux the maximum resolution is stuck at some impossibly low value, usually 640x480, and changing it in Xorg.conf doesn't help. The X.org log file then shows that the driver cannot determine the correct refresh rate for the monitor, so it ignores everything in Xorg.conf and just loads some minimalistic default mode. Googling the problem leads to an easy solution: set HorizSync and VertRefresh in Xorg.conf, and everything works.

    The problem seems to be a common one, and I've seen dozens of results recommending this solution. Each of them carries the warning that you should use the value ranges provided with the monitor, because if you don't, and your video card sends a signal with the wrong refresh rate, it can damage the monitor. Of course, you don't have a user manual for your monitor any more, and if you are lucky enough to find one in the attic or on the net, it doesn't contain any information about the supported refresh rates. So you just type in the values suggested in the solution description, which vary wildly depending on your source, and cross your fingers. You restart, and... you've set the wrong values. The monitor shows a short message like "input signal out of range", you do a hard restart, repair your Xorg.conf in recovery mode, and everything is fine, including your monitor.

    So does this warning reflect a real possibility, or is it just a geeky urban legend? Or is it something which used to happen in the past, before manufacturers started protecting monitors against it? Is it technically possible with every monitor technology, or can it only happen to a CRT? If you think it's true, why? Have you ever witnessed a monitor die from a wrong refresh configuration, or read about it in a reputable source?


  • An easily customizable linux distribution using minimal disk space?

    - by Frank
    I'm looking for a Linux distribution that can easily be used to create my own distribution: the same system with some software pre-installed. Basically I should be able to build an ISO which, when installed, gives that distribution with my desired software already on it. More specifically, I plan on installing MySQL and a bit of my own software, which shouldn't be too big. However, this distribution needs to be extremely small in terms of disk space; including MySQL it should not exceed 100 MB. It should, of course, still be able to connect to the internet and perform other standard functions. I don't need X or any sort of window manager, and would prefer not to have one since it would increase disk usage. So far I have tried ttylinux and Tiny Core Linux. I've found that ttylinux, while extremely small, has almost nothing, so that MySQL can't even be installed; Tiny Core Linux, on the other hand, is a bit too big. I've also found OpenEmbedded and Linux From Scratch, but I would prefer the install and build process to be much easier. What other distribution would you recommend for my purposes? Minimizing disk usage is the most important criterion, followed by ease of installing and creating the custom distribution.


  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers and hardware, but I wonder how it all comes together: CPU, RAM, graphics card, L1 cache, L2 cache, L3 cache, FSB, and all the other parts. Which is the biggest bottleneck? Why would a person not want or need a big value in one of those categories in certain situations?

    P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB?

    EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relations of the various hardware pieces, their specific functions and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What is behind that claim? I also forgot to mention hard-disk RPM.


  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?


  • how does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work?

    For example, I have a computer with a Network Interface Card (NIC), an Ethernet card connected by an Ethernet cable to a router. There is also another computer connected by a cable to another port of the router. This is a Belkin router operating over Ethernet in the LAN. When I connect to serverfault.com, the name maps to an IP address, and my computer now has the task of connecting to that IP address. But my computer itself cannot reach the serverfault IP address; only the router can. So the task of my computer is to find the IP address of the node that will do the routing to the public internet. How does my computer know that a particular IP address in the local network belongs to the router, and not to another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know that it must send Ethernet frames to the router with the expectation that the router will then send the packet on to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I enter a public IP address, my computer first knows to connect to 192.168.2.6?
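
    A rough illustration of where this knowledge lives on a typical Linux host (the addresses below are placeholders): the routing table, usually populated by DHCP or manual configuration, names the default gateway's IP, and ARP then resolves that IP to the router's MAC address so frames can be addressed to it.

    # the default route: traffic to non-local destinations is handed to this gateway IP
    ip route show default
    # e.g. "default via 192.168.2.1 dev eth0"

    # the gateway IP is resolved to a hardware (MAC) address via ARP before framing
    ip neigh show 192.168.2.1
    # e.g. "192.168.2.1 dev eth0 lladdr 00:11:22:33:44:55 REACHABLE"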


  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

    domains/
    domains/domain.ext/                                    #FTPS chroot for user domain.ext
    domains/domain.ext/public                              #the DocumentRoot of http://domain.ext
    domains/domain.ext/logs
    domains/domain.ext/subdomains/sub.domain.ext
    domains/domain.ext/subdomains/sub.domain.ext/public   #DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its dedicated user and group via mpm-itk, the umask being 027, and the logs are stored via a piped sudo command, like this:

    ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
    CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

    - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
    - Log DoS: the logs are on the same partition as all the domains. There are hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a tightly timed logrotate suffice?
    - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle two log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
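
    On the log-DoS point, a minimal logrotate sketch (the path, size cap and retention are assumptions, adjust to the real layout) that rotates per-vhost logs by size; copytruncate matters here because the log files are held open by the tee pipes:

    # /etc/logrotate.d/vhost-logs  (hypothetical path to the domains tree)
    /srv/domains/*/logs/*.log {
        size 100M
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }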


  • Kernel Logging disabled?

    - by Tiffany Walker
    uname -a
    Linux host 2.6.32-279.9.1.el6.i686 #1 SMP Tue Sep 25 20:26:47 UTC 2012 i686 i686 i386 GNU/Linux

    And the startup scripts:

    ls /etc/init.d/
    abrt-ccpp certmonger dovecot irqbalance matahari-broker mdmonitor nfs proftpd rpcbind single ypbind abrtd cgconfig functions kdump matahari-host messagebus nfslock psacct rpcgssd smartd abrt-oops cgred haldaemon killall matahari-network mysqld ntpd qpidd rpcidmapd sshd acpid cpuspeed halt ktune matahari-rpc named ntpdate quota_nld rpcsvcgssd sssd atd crond httpd lfd matahari-service netconsole oddjobd rdisc rsyslog sysstat auditd csf ip6tables lvm2-lvmetad matahari-sysconfig netfs portreserve restorecond sandbox tuned autofs cups iptables lvm2-monitor matahari-sysconfig-console network postfix rngd saslauthd udev-post

    But since I installed CSF/LFD I am getting nothing: LFD does not create lfd.log, and no blocks from the firewall are being logged in /var/log/messages either. This is not natural. I looked for klogd, but maybe I am looking in the wrong place to see whether it is enabled?

    ls /etc/init.d/syslog
    ls: cannot access /etc/init.d/syslog: No such file or directory

    So there is no syslog init script either? I also noticed this:

    csf -d 84.113.21.201
    Adding 84.113.21.201 to csf.deny and iptables DROP...
    iptables: No chain/target/match by that name.
    iptables: No chain/target/match by that name.

    I've never seen this before, and this is a dedicated box. Also:

    ./csftest.pl
    Testing ip_tables/iptable_filter...OK
    Testing ipt_LOG...OK
    Testing ipt_multiport/xt_multiport...OK
    Testing ipt_REJECT...OK
    Testing ipt_state/xt_state...OK
    Testing ipt_limit/xt_limit...OK
    Testing ipt_recent...OK
    Testing xt_connlimit...OK
    Testing ipt_owner/xt_owner...OK
    Testing iptable_nat/ipt_REDIRECT...OK
    Testing iptable_nat/ipt_DNAT...OK
    RESULT: csf should function on this server

    iptables -L
    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    Chain FORWARD (policy ACCEPT)
    target prot opt source destination
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
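
    A hedged first check, since on EL6 there is no /etc/init.d/syslog any more - the syslog daemon is rsyslog (visible in the listing above), and kernel messages reach /var/log/messages through its imklog module rather than through a separate klogd service:

    service rsyslog status
    chkconfig --list rsyslog

    # if it is stopped or disabled:
    chkconfig rsyslog on
    service rsyslog start

    # kernel messages (iptables LOG targets, and therefore LFD's evidence) should then
    # land in /var/log/messages; the kernel ring buffer works even without rsyslog
    grep -i imklog /etc/rsyslog.conf    # expect a "$ModLoad imklog" line
    dmesg | tail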


  • Error when starting qpidd as a service

    - by Sparks
    I have recently swapped from CentOS 5 to Fedora 17. Previously I have created my own init.d scripts successfully (albeit not for qpidd); however, in Fedora I cannot get it to work. I have created the following script (called qpidd) in the init.d directory:

    #!/bin/bash
    #
    # /etc/rc.d/init.d/qpidd
    #
    # QPID/AMQP Broker scripts
    #
    #
    # chkconfig: 2345 20 80
    # description: QPID/AMQP Broker service
    # processname: qpidd
    # pidfile: /var/lock/subsys/qpidd

    # Source function library.
    . /etc/init.d/functions

    SERVICENAME=qpidd

    start() {
        echo -n "Starting $SERVICENAME: "
        daemon qpidd -d &
        retval=$?
        touch /var/lock/subsys/$SERVICENAME
        return $retval
    }

    stop() {
        echo -n "Shutting down $SERVICENAME: "
        qpidd -q &
        retval=$?
        rm -f /var/lock/subsys/$SERVICENAME
        return $retval
    }

    case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        status)
            status qpidd
            ;;
        restart)
            stop
            start
            ;;
        condrestart)
            [ -f /var/lock/subsys/<service> ] && restart || :
            ;;
        *)
            echo "Usage: $SERVICENAME {start|stop|status|restart}"
            exit 1
            ;;
    esac
    exit $?

    After this, I ran chkconfig --add qpidd. However, now when I run sudo service qpidd start I get the following message:

    Starting qpidd (via systemctl):  Job failed. See system journal and 'systemctl status' for details.

    If I then run systemctl status qpidd I get the following message:

    Failed to issue method call: Unit name qpidd is not valid.

    I am now lost. I have searched the web and Stack Overflow but cannot find anybody with a similar problem. Any help, or direction to a website that can help, would be much appreciated. Sparks :)
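
    Fedora 17 boots with systemd, and `service` is only a shim over `systemctl`, which is strict about unit names and does not cope well with a SysV script that backgrounds the daemon itself. A minimal sketch of a native unit file instead (paths and options are assumptions; the stock qpid-cpp-server package may already ship one):

    # /etc/systemd/system/qpidd.service
    [Unit]
    Description=QPID/AMQP Broker
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/sbin/qpidd
    ExecStop=/usr/sbin/qpidd -q

    [Install]
    WantedBy=multi-user.target

    Then: systemctl daemon-reload && systemctl enable qpidd.service && systemctl start qpidd.service, and check it with systemctl status qpidd.service.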


  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing data into a Google account. Over the last few years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Really, synchronizing between my work and home PCs freed me from manual sync. But in recent months I have experienced strange glitches. I guess it may be caused by the large number of stored bookmarks (potentially about 3K, in estimate, but please don't ask why :)) and extensions (about 130 installed but only 10-15 used daily). I can mention the following strange things:

    Recently added bookmarks sometimes are not synchronized (e.g. I add a bookmark at work, but it's not guaranteed I can see it that evening), despite about:sync indicating a good sync process. Sometimes recently modified bookmarks appear in either (let's call them) last-at-home or last-at-work bookmark folders. Sometimes bookmarks are not synced at all (moreover, Chromium versions may even crash). Extensions are not synced at all now. Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news.

    I'm not sure, but it looks like I might be exceeding Chrome's internal sync limits... Is that right? Are there any workarounds, or should I make a massive bookmarks/extensions cleanup (which I really don't want to do :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.

    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider junk), and got some noticeable results: the extensions count is below 100 (exactly 97); the chrome://extensions page no longer gets slow (or even frozen) when enabling/disabling/uninstalling extensions; and the extensions seem to be synchronized again.


  • Restore passwd for root on a server

    - by s.mihai
    Hello,

    I have a DVR server with embedded Linux. It has some telnet functions but I don't have the password for it (the Chinese manufacturer refuses to give me the password). I did get an upgrade folder from them and found a passwd file inside, so I assume that when I upgrade the firmware the password in that file will be used. Now I am trying to modify the file so that I can insert a password I already know. The problem is that I don't know how to create the password hash; from what I figured out, the password hash is $1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961

    The file is named rom.ko, and I found the command telnetd /mnt/yaffs/web/boa -c /mnt/yaffs/web & /bin/cp -f /mnt/yaffs/rom.ko /etc/shadow in a script file, so I assume this is the right way.

    Can you help me reconstruct a password that I already know? Tell me how, or make one for me :) ?... passwd file:

    root:$1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961:0:0:99999:7:-1:-1:33637592
    bin::10897:0:99999:7:::
    daemon::10897:0:99999:7:::
    adm::10897:0:99999:7:::
    lp::10897:0:99999:7:::
    sync::10897:0:99999:7:::
    shutdown::10897:0:99999:7:::
    halt::10897:0:99999:7:::
    mail::10897:0:99999:7:::
    news::10897:0:99999:7:::
    uucp::10897:0:99999:7:::
    operator::10897:0:99999:7:::
    games::10897:0:99999:7:::
    gopher::10897:0:99999:7:::
    ftp::10897:0:99999:7:::
    nobody::10897:0:99999:7:::
    next::11702:0:99999:7:::
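
    The $1$ prefix marks an MD5-crypt hash and 1/lfbDKX is its salt, so a replacement hash for a password you already know can be generated locally; one hedged way (a placeholder password and salt are shown) is openssl's passwd helper, with the output pasted into the root line of the file:

    # MD5-crypt ("$1$...") hash of the password "newpass" with salt "mysalt"
    openssl passwd -1 -salt mysalt newpass
    # prints a string of the form $1$mysalt$xxxxxxxxxxxxxxxxxxxxxx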


  • Is there a free XML viewer to do "pivot tables"

    - by FumbleFingers
    I have an *.xml file on a PC running Windows XP, with a structure something like:

    <movies id="my movies">
      <movie name="Unforgiven">
        <people star="Clint Eastwood" director="Clint Eastwood"/>
      </movie>
      <movie name="A Fistful of Dollars">
        <people star="Clint Eastwood" director="Sergio Leone"/>
      </movie>
      <movie name="J Edgar">
        <people star="Leonardo DiCaprio" director="Clint Eastwood"/>
      </movie>
    </movies>

    I want to open this file in a viewer utility which will show one line per movie, filtered by a condition such as director="Clint Eastwood", including the relevant value of the movie name on each line. Note - that's just an example. My actual file has thousands of lines, each possibly several hundred bytes long, and there are many more levels and named values. The important thing is that I want to apply a filter on a specified value of some field at some level, and I want to see one line for every case where that value occurs, showing the values of any fields I specify, even if they're stored at higher levels. I may be mistaken in calling what I want a "pivot table" - I don't know what I should call it.
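
    Not a GUI viewer, but this filter-and-project operation is exactly what a command-line XPath tool can do; a sketch with xmlstarlet, using the element and attribute names from the example above (the director value is just the example condition, the file name is a placeholder):

    # one output line per movie whose director matches, showing the movie name
    xmlstarlet sel -t \
      -m '//movie[people/@director="Clint Eastwood"]' \
      -v '@name' -n \
      movies.xml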


  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish:

    1. Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process.

    2. Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again, and it would be ideal if it were easy to determine which ranges of objects were stored in each chunk.

    Do you have any advice on how to accomplish these goals? If the files were in the database as BLOBs these tasks would be much easier, since normal database backup and restore functionality would handle them. I am not sure how to accomplish the same thing when file data is linked to database rows.
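
    For the first goal, one hedged pattern is to order the two copies so that any inconsistency falls on the harmless side: dump the database first, then copy the upload directory, so anything uploaded in between becomes an orphan file in the backup rather than a database row referencing a missing file. A rough sketch, with placeholder paths and database name:

    #!/bin/bash
    set -e
    STAMP=$(date +%Y%m%d-%H%M)
    BACKUP_DIR=/backups/$STAMP          # placeholder destination
    mkdir -p "$BACKUP_DIR"

    # consistent snapshot of the database (custom format, restorable with pg_restore)
    pg_dump -Fc mydb > "$BACKUP_DIR/mydb.dump"

    # then the linked files; uploads arriving after the dump are extra, never missing
    rsync -a /srv/app/uploads/ "$BACKUP_DIR/uploads/"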


  • Can't install MailParse on cpanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I've installed MailParse via the Module Installers section of WHM, and it did install it; here is the end of the setup log:

    running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install
    Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/
    running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils
    508718 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5
    508745 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr
    508746 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib
    508747 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php
    508748 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions
    508749 4 drwxr-xr-x 2 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626
    508744 196 -rwxr-xr-x 1 root root 193502 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so
    Build process completed successfully
    Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so'
    install ok: channel://pecl.php.net/mailparse-2.1.5
    Extension mailparse enabled in php.ini
    The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626

    Now, when I try to use mailparse functions from PHP I get the following error:

    PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0

    What should I do?
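
    The build log above shows the module landing under /usr/lib/php/..., while the PHP that actually runs is looking in /usr/local/lib/php/...; a hedged quick fix is to copy (or symlink) the built module into whatever extension_dir PHP reports, then restart Apache (paths are the ones from the log, and the php.ini location may differ on your build):

    # which directory is PHP really loading extensions from?
    php -i | grep extension_dir

    # copy the built module there (source path taken from the log above)
    cp /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so \
       /usr/local/lib/php/extensions/no-debug-non-zts-20090626/

    grep mailparse /usr/local/lib/php.ini   # expect: extension=mailparse.so
    service httpd restart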


  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image in VM use. The common knowledge is that partition-based images are faster than file-based images because of a decreased overhead. It makes sense, but I've never seen any actual numbers. My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and then benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So perhaps throwing out the performance justification, is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or perhaps is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
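
    For anyone reproducing this comparison, a crude sequential-write sketch with dd (device and paths are placeholders, and writing to the raw partition destroys its contents); O_DIRECT sidesteps the host page cache, which otherwise flatters the file-backed image:

    # raw partition (DESTROYS whatever is on /dev/sdb1)
    dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct conv=fdatasync

    # file-backed image on an ext4 filesystem created on that same partition
    dd if=/dev/zero of=/mnt/images/disk.img bs=1M count=4096 oflag=direct conv=fdatasync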


  • Alternative software for Pinnacle PCTV 100e

    - by Stijn Sanders
    I have a Pinnacle PCTV 100e external USB cable television receiver. I've been using Pinnacle's software that came with the card (TVCenter Pro) to record things at given times. Things I don't like are the extremely high CPU load, and that it doesn't seem to stop the screensaver from kicking in when watching in full screen. Also, I was away for the last two weeks, and the schedules went terribly bust: some items were recorded hours before or after the actual scheduled time (and now I've missed some shows), and some recurring schedules weren't converted into the next occurrence correctly!

    Is there good alternative software that would work with my PCTV 100e? (Preferably cheap or free.) I've tried VLC Player, which gets video but no audio. I've tried MediaPortal, which crashes when trying to scan for channels; when I select a channel manually, the stored .mpg has big encoding errors and is also missing audio. There's VirtualDub, but that doesn't have ready-made scheduled-recording options. I could conjure up some scheduled scripts for it, but I've noticed the sync gets awfully wrong after some time. I've tried Windows Media Center, but it doesn't seem to support the PCTV 100e.


  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend.

    I'm looking for a known-good way to set up an Epson Artisan 800 on Ubuntu specifically, or on any Linux box in general. It is a printer/scanner with Ethernet/Wi-Fi/USB. I'd like to use it as a network printer/scanner, being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine), that is doable (again, then sharing the print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or a similar one) in a multi-platform environment to share how they set it up and how well it worked.

    Updated: a little more information. Like most printers (I expect), the documentation basically says "don't use plug-and-play, run our setup CD from your Windows/Mac system" to do anything (even to set it up for network use). I guess that's to make it easy for everyone else to set up, but when you're looking to use it with an OS unsupported by Epson's documentation, you're just stuck on your own. What I was hoping for was someone who could say: "Forget the bundled software, do [this] to set it up on Wi-Fi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and will post my own setup when I'm done, if there's no one else out there with that experience.


  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code they document. My Subversion client (TortoiseSVN) comes with a little VBA script that leverages the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source document tagged as revisions, so I can then review the differences one by one and confirm or discard the changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the one actual change.

    My question is: do you know of an application or plug-in that is able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old, but that's what my company gives me; I'm open to solutions that use newer versions, though.)


  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing; it is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: namely, server B should have server A's IP address on the loopback interface and should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
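
    Step 4, minus the content-aware part, is what Linux Virtual Server (IPVS) calls direct routing or "gatewaying" mode, which only rewrites the destination MAC and therefore only works for real servers on the same L2 segment; a rough sketch of both sides with placeholder addresses (the file-to-server mapping itself still has to live elsewhere, since IPVS is purely layer 4):

    # on the director (server A): accept the VIP on port 80 and hand packets to a
    # real server by rewriting only the destination MAC (-g = gatewaying / DSR)
    ipvsadm -A -t 203.0.113.10:80 -s rr
    ipvsadm -a -t 203.0.113.10:80 -r 192.0.2.20:80 -g

    # on the real server (server B): own the VIP silently so it accepts the traffic
    # but never answers ARP for it
    ip addr add 203.0.113.10/32 dev lo
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2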


  • Easy_install the wrong version of python modules (Mac OS)

    - by user73250
    I installed Python 2.7 on my Mac. When typing "python" in the terminal, it shows:

    $ python
    Python 2.7 (r27:82508, Jul 3 2010, 20:17:05)
    [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.

    The Python version is correct here. But when I try to easy_install some modules, the system installs them for Python 2.6, and they cannot be imported into Python 2.7, so of course I cannot use the functions I need in my code. Here's an example of easy_install graphy:

    $ easy_install graphy
    Searching for graphy
    Reading pypi.python.org/simple/graphy/
    Reading http://code.Google.com/p/graphy/
    Best match: Graphy 1.0.0
    Downloading http://pypi.python.org/packages/source/G/Graphy/Graphy-1.0.0.tar.gz#md5=390b4f9194d81d0590abac90c8b717e0
    Processing Graphy-1.0.0.tar.gz
    Running Graphy-1.0.0/setup.py -q bdist_egg --dist-dir /var/folders/fH/fHwdy4WtHZOBytkg1nOv9E+++TI/-Tmp-/easy_install-cFL53r/Graphy-1.0.0/egg-dist-tmp-YtDCZU
    warning: no files found matching '*.tmpl' under directory 'graphy'
    warning: no files found matching '*.txt' under directory 'graphy'
    warning: no files found matching '*.h' under directory 'graphy'
    warning: no previously-included files matching '*.pyc' found under directory '.'
    warning: no previously-included files matching '*~' found under directory '.'
    warning: no previously-included files matching '*.aux' found under directory '.'
    zip_safe flag not set; analyzing archive contents...
    graphy.all_tests: module references __file__
    Adding Graphy 1.0.0 to easy-install.pth file
    Installed /Library/Python/2.6/site-packages/Graphy-1.0.0-py2.6.egg
    Processing dependencies for graphy
    Finished processing dependencies for graphy

    So it installs graphy for Python 2.6. Can someone help me with this? I just want to set my default easy_install Python version to 2.7.
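
    The usual hedged workaround is to call the setuptools that belongs to the 2.7 installation instead of the system default, either through the versioned script or through the interpreter itself (this assumes setuptools/distribute has been installed for 2.7):

    # which interpreter is the default easy_install bound to?
    head -1 "$(which easy_install)"

    # install with the 2.7 copy instead
    easy_install-2.7 graphy
    # or, equivalently, go through the interpreter explicitly
    python2.7 -m easy_install graphy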


  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

    #!/bin/sh
    #
    # Startup script for Conquest
    #
    # chkconfig: 345 85 15    - start or stop process definition within the boot process
    # description: Conquest DICOM Server
    # processname: conquest
    # pidfile: /var/run/conquest.pid

    # Source function library. This creates the operating environment for the process to be started
    . /etc/rc.d/init.d/functions

    CONQ_DIR=/usr/local/conquest

    case "$1" in
    start)
            echo -n "Starting Conquest DICOM server: "
            cd $CONQ_DIR && daemon --user mruser ./dgate -v   # - Starts only one process of a given name.
            echo
            touch /var/lock/subsys/conquest
            ;;
    stop)
            echo -n "Shutting down Conquest DICOM server: "
            killproc conquest
            echo
            rm -f /var/lock/subsys/conquest
            rm -f /var/run/conquest.pid   # - Only if process generates this file
            ;;
    status)
            status conquest
            ;;
    restart)
            $0 stop
            $0 start
            ;;
    reload)
            echo -n "Reloading process-name: "
            killproc conquest -HUP
            echo
            ;;
    *)
            echo "Usage: $0 {start|stop|restart|reload|status}"
            exit 1
    esac

    exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

    # ./conquest start
    Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
                                                               [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with its --chdir option to change to the directory where dgate lives, but I haven't found a way to do this with the Red Hat daemon function.
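
    One hedged workaround, assuming the stock Red Hat daemon() function ultimately runs its arguments through a fresh shell for the target user (which is what resets the working directory and produces the "-bash: ./dgate" error above), is to fold the cd into the command string so both happen inside that shell:

    start)
            echo -n "Starting Conquest DICOM server: "
            # assumption: daemon() hands this quoted string to bash -c under mruser,
            # so the cd and the relative ./dgate invocation run together
            daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
            echo
            touch /var/lock/subsys/conquest
            ;;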


  • MySQL Error: FUNCTION LEVENSHTEIN already exists

    - by kgrote
    I've got an ExpressionEngine database and I exported a couple of tables from it, then dropped those tables. When I try to re-import the tables in PHPMyAdmin, I get this error:

    SQL query:

    --
    -- Database: `my_db`
    --
    DELIMITER $$
    --
    -- Functions
    --
    CREATE DEFINER=`my_username`@`%` FUNCTION `LEVENSHTEIN`(s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS int(11)
        DETERMINISTIC
    BEGIN
        DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT;
        DECLARE s1_char CHAR;
        DECLARE cv0, cv1 VARBINARY(256);
        SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0;
        IF s1 = s2 THEN
            RETURN 0;
        ELSEIF s1_len = 0 THEN
            RETURN s2_len;
        ELSEIF s2_len = 0 THEN
            RETURN s1_len;
        ELSE
            WHILE j <= s2_len DO
                SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1;
            END WHILE;
            WHILE i <= s1_len DO
                SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1;
                WHILE j <= s2_len DO
                    SET c = c + 1;
                    IF s1_char = SUBSTRING(s2, j, 1) THEN
                        SET cost = 0;
                    ELSE
                        SET cost = 1;
                    END IF;
                    SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost;
                    IF c > c_temp THEN
                        SET c = [...]

    MySQL said: Documentation
    #1304 - FUNCTION LEVENSHTEIN already exists

    I get this error even if I drop all tables from the DB and try to import anything. The only way I can get the error to go away is to totally delete the database and re-create it. What's causing that error and how can I stop it from happening?
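
    The function lives at the database level rather than in any table, so dropping the tables leaves LEVENSHTEIN behind and the re-import trips over its own CREATE FUNCTION statement. A hedged pre-import cleanup (or, equivalently, delete the Functions block from the dump before importing; the dump file name is a placeholder):

    # remove the leftover routine so the dump's CREATE FUNCTION can run again
    mysql -u my_username -p my_db -e "DROP FUNCTION IF EXISTS LEVENSHTEIN;"

    # then re-run the import
    mysql -u my_username -p my_db < my_dump.sql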


  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction function of tar, to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple cat $input-file | base64 -d > $output-file will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images?

    In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.

    Background information not needed for the short version: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
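
    GNU tar can also hand each extracted member to a filter via --to-command, which receives the member name in $TAR_FILENAME, so the decode can happen in one pass without a backup-specs hook; a rough sketch (the archive name is a placeholder, and it assumes every regular file in the archive really is base64 text):

    # decode every member through base64 -d as it is extracted
    tar -xf effect.tar \
        --to-command='mkdir -p "$(dirname "$TAR_FILENAME")"; base64 -d > "$TAR_FILENAME"'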


  • Do entries in local 'hosts' files override both forward and reverse name lookups?

    - by Murali Suriar
    If I have the following entries in a hosts file:

    192.168.100.1   bugs
    192.168.100.2   daffy.example.com
    192.168.100.3   elmer.example.com.

    will IP-to-name resolution attempts by local utilities (I assume using 'gethostbyaddr' or the Windows equivalent) honour these entries? Is this behaviour configurable? How does it vary between operating systems? Does it matter whether the hosts-file entries are fully qualified or not?

    EDIT: In response to Russell, my test Linux system is running RHEL 4. My /etc/nsswitch.conf contains the following 'hosts' line:

    hosts: files dns nis

    If I ping any of my hosts by name (e.g. bugs, daffy), the forward resolution works correctly. If I traceroute any of them by IP address, the reverse lookup functions as expected. However, if I ping them by IP, ping doesn't appear to resolve their host names. My understanding was that Linux ping would always attempt to resolve IPs to names unless instructed otherwise. Why would traceroute be able to handle reverse lookups in hosts files, but ping not?
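
    A quick way to see what the NSS "files" source itself returns, independent of what any particular tool then chooses to do with the answer, is getent, which follows the same hosts: line in nsswitch.conf:

    # forward lookup through nsswitch (files before dns, per the line above)
    getent hosts daffy.example.com

    # lookup by address: prints the name from /etc/hosts if the files backend answers
    getent hosts 192.168.100.2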


  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive, using DriveImage XML, that is attached to my computer via a USB adapter. I'm running Windows 7 Ultimate 64-bit. DriveImage XML is returning:

    Could not initialize Windows Volume Shadow Service (VSS).
    ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start.
    ERROR TIMEOUT
    Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime web page turned this up:

    Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy. Also make sure you are able to stop and start these services. Possible reasons for VSS failures: for VSS to work, at least one volume in your computer must be NTFS; if you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be stopped and started without issue. Using regsvr32 to register "oleaut32.dll" succeeds, but "oleaut.dll" returns:

    The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?


  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to setup a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here: HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions, I've been banging my head against this one for a bit...

