Search Results

Search found 3551 results on 143 pages for 'canonical sources'.

  • Our application responds differently on the client's server

    - by WtFudgE
    I have an application running on our local server, where it works perfectly. However, when I deploy the application to our client's server, there are problems. If I copy all the sources from their server back to another local machine, it works fine again... It is an ASP.NET application talking to Windows Server 2008. The client's server specs are: Intel Xeon 2.13 GHz, 6 GB RAM, 300 GB HDD, Windows Server 2008 R2. Although I think it's not that important, I will describe the problems I am talking about to give you a better idea: they are related to updating and deleting fields in the SQL Server. Whenever more than one user is running the application and updating or deleting (not even the same record!), fields seem to be updated/deleted wrongly. But as I said before, if we copy the source to another server this is not the case, so I assume it is more configuration-related. Do you guys have any idea what the issue could be? Thanks a lot. EDIT: my client told me he disabled the SQL-related accounts. Could this be related?

    Read the article

  • Advice on networking career

    - by fmysky
    Hello! I need some insights on a networking career. I have a valid CCNA and a few months of experience as a Jr. network analyst. My focus is the type of job where I administer networking/storage hardware and possibly manage servers/workstations too. I am new to this job area, and specifically new to a city like LA. I am currently unemployed, so my question is: should I continue with my Cisco certifications (routing/wireless), other CompTIA certs, or both, to get a reliable job? I am really interested in CCNA Wireless, but watching craigslist (LA) and other job sites, it seems people need more Cisco voice people, while some others ask for Net+ certs. I have scarce financial resources, so it's better to make some good decisions. I have not been applying yet due to some personal problems, but I will soon. Thank you! P.S. I don't know if I can ask questions like this here. Sorry about that.

    Read the article

  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to do some GIS operations. I tried a simple:

        # yum install gdal

    First, the GDAL version offered is 1.4 (the latest release is 1.9). Then, I see mysql in the dependency list. But I already have mysql installed, from another repository (remi), with a newer version than the one yum suggests... Is it an architecture problem (yum suggests i386)? I suspected so, but it is still impossible to install. Here's the error I get:

        Transaction Check Error:
          package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed

    Then, I tried to install the latest available version (1.9.2) from source. I downloaded the GDAL tar.gz, extracted the files and installed it as follows:

        # tar -xzf gdal-1.9.2.tar.gz
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install

    But during the make, some strange errors about RegisterOGRMySQL show up that I can't understand:

        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2

    Has anyone a solution? Thanks a lot!
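    The undefined RegisterOGRMySQL reference at link time suggests configure detected MySQL and enabled the OGR MySQL driver, but the build is not linking against the client library. A sketch of two ways around it, assuming the remi packages put mysql_config at /usr/bin/mysql_config (the path and the trimmed flag list are assumptions, not from the original post):

        # point the MySQL driver at the right client library explicitly...
        ./configure --with-mysql=/usr/bin/mysql_config --with-static-proj4=/usr/local/lib --with-threads
        # ...or drop the driver entirely if MySQL support isn't needed
        ./configure --without-mysql --with-static-proj4=/usr/local/lib --with-threads
        make clean && make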

    Read the article

  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed-test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed, just to check what the result would be. It was scary, very scary! What I've come up with after reading the link above is that Google has a kind of extensive database of WiFi locations. That would be understandable in the case of public and open WiFi networks that are used by a lot of people: some of them might be using applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google can pinpoint a WiFi location based on its MAC address, along with the bits of info that have been gathered via various sources. The issue here is that my WiFi is private; I don't even broadcast my WiFi name. So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as much as possible.

    Read the article

  • Can I have a single solid state drive and a RAID array on the same machine?

    - by jaminto
    Hi. To summarize, I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive, and was planning on using this as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard, which has an ATI SB600 RAID controller on board. I switched the "OnBoard SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives show up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4: pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company. All we really need is to be able to index different logs in a central place and have reasonable searching on them. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications: everything gets logged via log4net to either the event log on Windows or a text file on Linux, and Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working OK -- that has saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sending to SQL Server (perhaps with fulltext enabled), work? I'd hope the cost would be well under five figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

    Read the article

  • Pasting formatted Excel range into Outlook message

    - by Steph
    Hi everyone, I am using Office 2007 and I would like to use VBA to paste a range of formatted Excel cells into an Outlook message and then mail the message. The following code (which I lifted from various sources) runs without error and then sends an empty message... the paste does not work. Can anyone see the problem and, better yet, help with a solution? Thanks, -Steph

        Sub SendMessage(SubjectText As String, Importance As OlImportance)
            Dim objOutlook As Outlook.Application
            Dim objOutlookMsg As Outlook.MailItem
            Dim objOutlookRecip As Outlook.Recipient
            Dim objOutlookAttach As Outlook.Attachment
            Dim iAddr As Integer, Col As Integer, SendLink As Boolean
            'Dim Doc As Word.Document, wdRn As Word.Range
            Dim Doc As Object, wdRn As Object

            ' Create the Outlook session.
            Set objOutlook = CreateObject("Outlook.Application")
            ' Create the message.
            Set objOutlookMsg = objOutlook.CreateItem(olMailItem)

            Set Doc = objOutlookMsg.GetInspector.WordEditor
            'Set Doc = objOutlookMsg.ActiveInspector.WordEditor
            Set wdRn = Doc.Range
            wdRn.Paste

            Set objOutlookRecip = objOutlookMsg.Recipients.Add("[email protected]")
            objOutlookRecip.Type = 1
            objOutlookMsg.Subject = SubjectText
            objOutlookMsg.Importance = Importance

            With objOutlookMsg
                For Each objOutlookRecip In .Recipients
                    objOutlookRecip.Resolve
                    ' Set the Subject, Body, and Importance of the message.
                    '.Subject = "Coverage Requests"
                    'objDrafts.GetFromClipboard
                Next
                .Send
            End With

            Set objOutlookMsg = Nothing
            Set objOutlook = Nothing
        End Sub
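    One thing that stands out: nothing in the Sub ever copies the Excel range to the clipboard before wdRn.Paste runs, which would explain the empty message. A minimal sketch of the missing step, to go just before the paste (the sheet and range names are hypothetical placeholders):

        ' Put the formatted range on the clipboard first, then paste it
        ' into the Word range behind the message body.
        Worksheets("Sheet1").Range("A1:C10").Copy
        wdRn.Paste
        Application.CutCopyMode = False  ' clear Excel's copy marquee afterwards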

    Read the article

  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)? The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 with ImageMagick 6.6.2 and everything works fine; on the server I have 6.5... and I have the problem. Reinstalling Ubuntu as 11.* is not a solution. I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (Lucid) list to the Ubuntu 11.04 (Natty) one and updating ImageMagick. After this I had ImageMagick 6.6.2 (according to phpinfo()), but ImageMagick no longer works. Whatever action I try, I get this error:

        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'

    How can I fix it? Or how can I return to the old ImageMagick version? And here is the problem when I try to install from source:

        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
        /tmp/image/ImageMagick-6.7.2-7#
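    "C compiler cannot create executables" at configure time usually points at an incomplete toolchain rather than anything ImageMagick-specific, especially after a sources.list swap between releases. A sketch of the usual first steps (package names are the stock Ubuntu ones; config.log will say which piece is actually missing, and the rsvg delegate for SVG work is an assumption):

        # restore a complete compiler toolchain, then re-run configure
        sudo apt-get update
        sudo apt-get install build-essential
        # delegate library if SVG support is wanted
        sudo apt-get install librsvg2-dev
        ./configure && make && sudo make install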

    Read the article

  • Multiple cable adapter setup not working - VGA to smartphone. All cables tested and working

    - by Christopher Rucinski
    Issue: The pictured overhead projector setup does not work. #1 - #2 - #3 - Phone. All cables are tested and work! The issue is the HDMI connection between cables #2 and #3. With all other cables, the screen is automatically displayed on the projector screen, no extra work needed. With the pictured setup, the smartphone screen is not displayed on the projector screen. What is the issue with the HDMI connection??

    Background: We recently had to give presentations at work (a school), but the administration only provided VGA hookups, most likely for reasons of cost. Anyway, several teachers have brand new Samsung Series 9 ultrabooks (or similar) -- you know, the ones without VGA support -- so I bought an adapter for those ultrabooks (cable #5 in the picture below). However, my coworker and I have both been wanting to simply display our phone screens on the projector, which I knew would require some extra work.

    What I have:
    - VGA cable to projector (cables go through the wall), for laptops
    - HDMI to VGA cable, for laptops
    - MHL adapter, for 11-pin micro-USB phones
    - micro-HDMI to VGA cable, for ultrabooks
    - 11-pin to 5-pin micro-USB adapter, for older 5-pin micro-USB phones

    Equipment:
    - Projectors: one with VGA and HDMI input (the issue is coworkers forget to switch sources), one with VGA-only input
    - Laptops: two new Samsung ultrabooks without VGA or HDMI support, one ultrabook with VGA and HDMI support, several other laptops with at least VGA support
    - One tablet with 11-pin micro-USB, at least one new phone with 11-pin micro-USB, at least one old phone with 5-pin micro-USB

    Tested:
    - VGA cable (#1) to laptop: good
    - VGA cable (#1) to HDMI adapter (#2) to laptop: good
    - VGA cable (#1) to micro-HDMI adapter (#5) to laptop: good
    - Projector to HDMI cable (not shown) to MHL adapter (#3) to Galaxy Note 3 smartphone: good
    - VGA cable (#1) to HDMI adapter (#2) to MHL adapter (#3) to Galaxy Note 3 smartphone: does not work!!

    Extra notes: The 11-pin MHL adapter will not fit inside the 11-pin to 5-pin micro-USB adapter, so older phones cannot be displayed on the screen.

    Read the article

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up, I think because the USB drive ran out of space, and it required a reboot. Since that time, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them, as Windows 7 gives an error stating they are corrupt. I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, yet everything I do a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
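    A reasonable first step for a damaged file table is Windows' own filesystem repair, which rebuilds NTFS/FAT metadata in place; a minimal sketch, assuming the external drive is mounted as E: (the drive letter is an assumption), run from an elevated Command Prompt:

        :: repair filesystem/file-table errors
        chkdsk E: /f
        :: optionally also scan for bad sectors (much slower)
        chkdsk E: /r

    If the data matters, imaging the drive before letting chkdsk rewrite anything is the safer order of operations.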

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy, and apt asked me to run apt-get -f install to fix it, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is
        disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK
        WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the
        /etc/udev/kernel-upgrade file. There is always a safer way to upgrade, do not try this unless
        you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I basically just ran the VBoxLinuxAdditions-amd64.run script), so I don't know how to disable them. Any ideas? Thanks!
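    The blocker here is the running kernel rather than the Guest Additions: udev 151's preinst refuses to proceed while a kernel with CONFIG_SYSFS_DEPRECATED (the old lenny one) is booted. A sketch of the usual order of operations, assuming the amd64 kernel metapackage name of that era (the exact name is an assumption; check apt-cache search linux-image):

        # install a newer kernel first, reboot into it, then let the upgrade finish
        apt-get install linux-image-2.6-amd64
        reboot
        apt-get -f install
        apt-get dist-upgrade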

    Read the article

  • Unable to connect to shared (iscsitarget) DVD-RW drive on Ubuntu Karmic box

    - by Develop7
    Intro: I have a desktop with a DVD-RW drive that runs primarily Linux (namely Ubuntu 9.10). My wife has a netbook that runs Windows XP and has no CD/DVD drive. There's also a LAN through our ADSL modem/router. I've "ported" (actually, I just grabbed the sources and ran dpkg-buildpackage) the iscsitarget package from Ubuntu Lucid to Karmic (here are the packages), installed it (sudo aptitude install iscsitarget; sudo m-a a-i iscsitarget) and configured it in the following way (/etc/ietd.conf):

        Target iqn.2020-01.local.develop7-desktop:storage.disc.dvdrw
            Lun 0 Path=/dev/sr0,Type=blockio
        # (I've skipped the commented lines)

    Also, I've opened port 3260 with ufw:

        $ sudo ufw status | grep 3260
        3260 ALLOW 192.168.1.0/24

    Problem: But (here's the trouble) I still can't connect to this target from the Windows box. Microsoft Software iSCSI Initiator reports "Logon failure" upon each connect attempt and, respectively, fails to connect. After an unsuccessful connection attempt I noticed this line in the output of dmesg | tail:

        iscsi_trgt: ioctl(299) invalid ioctl cmd c078690d

    Question: So the question is: what's wrong with my config/iSCSI target/whatever else? Or, in short: what am I doing wrong? Thanks in advance.
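    As an aside, a "Logon failure" from the Microsoft initiator is often an authentication (CHAP) mismatch rather than a transport problem, so it can be worth stating the expected credentials explicitly in ietd.conf; a sketch, assuming CHAP is actually wanted (the username and secret are placeholders, and the Windows initiator requires a 12-16 character secret):

        Target iqn.2020-01.local.develop7-desktop:storage.disc.dvdrw
            IncomingUser dvduser secret12chars
            Lun 0 Path=/dev/sr0,Type=blockio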

    Read the article

  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of the configuration under Wheezy, but I need some debs that are only available in Jessie. So I updated sources.list and did a dist-upgrade. Everything went well, but after the reboot I noticed that I had lost all networking. The repositories are unreachable, and even a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue my work? I have a poor connection and cannot afford to download the whole install DVDs.

        root@vm~# ifconfig
        lo    Link encap:Boucle locale
              inet adr:127.0.0.1 Masque:255.0.0.0
              adr inet6: ::1/128 Scope:Hôte
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 lg file transmission:0
              RX bytes:164238 (160.3 KiB) TX bytes:164238 (160.3 KiB)
        root@vm~#

    I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian 3.14.4-1). Please help. Thanks.
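    Only the loopback interface appears, which after a Wheezy-to-Jessie upgrade is commonly a NIC naming/configuration mismatch: the interface may no longer be called what /etc/network/interfaces expects. A sketch of how to check and bring networking up by hand (eth0 is an assumption; use whatever name the first command actually shows):

        # list every interface the kernel sees, configured or not
        ip link show
        # bring it up with DHCP for now
        ip link set eth0 up
        dhclient eth0
        # and to make it permanent, in /etc/network/interfaces:
        #   auto eth0
        #   iface eth0 inet dhcp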

    Read the article

  • Files built with a makefile are disappearing (including the binary)

    - by Reid
    I am building a program on a TS-7800 (SBC), and when I run make (shown below), it appears to go through all of the steps normally, but in the end I do not get a binary file. Why is this, and how can I get my file?

    makefile:

        CC= /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc

        # compiler options
        #CFLAGS= -O2
        CFLAGS= -mcpu=arm9
        #CFLAGS= -pg -Wall

        # linker
        LN= $(CC)

        # linker options
        LNFLAGS=
        #LNFLAGS= -pg

        # extra libraries used in linking (use -l command)
        LDLIBS= -lpthread

        # source files
        SOURCES= HMITelem.c Cpacket.c GPS.c ADC.c Wireless.c Receivers.c CSVReader.c RPM.c RS485.c

        # include files
        INCLUDES= Cpacket.h HMITelem.h CSVReader.h RS485.h

        # object files
        OBJECTS= HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o

        HMITelem: $(OBJECTS)
                $(LN) $(LNFLAGS) -o $@ $(OBJECTS) $(LDLIBS)

        .c.o: $*.c
                $(CC) $(CFLAGS) -c $*.c

        RUN : ./HMITelem

        #clean:
        #        rm -f *.o
        #        rm -f *~

    Output:

        root@ts7800:ReidTest# make
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c HMITelem.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Cpacket.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c GPS.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c ADC.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Wireless.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Receivers.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c CSVReader.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RPM.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RS485.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -o HMITelem HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o -lpthread

    Thank you.
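    One detail worth a second look: the output shows the link step completing without error, so HMITelem should exist in the build directory at that moment, and the bare "RUN : ./HMITelem" rule is not a conventional way to run it (it declares ./HMITelem as a prerequisite of a recipe-less target). A sketch of the conventional form, in case a stray rule is confusing things (the target name "run" is my choice, not from the original):

        .PHONY: run
        run: HMITelem
                ./HMITelem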

    Read the article

  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to turn to the experts here at SuperUser, as my other sources (mainly Google ;-)) haven't proved very helpful... Basically, I would like to create a VHD image of a physical disk, to be archived/accessed/maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on the web about how to do this, but none that meets exactly the conditions I would like to achieve:

    - I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...)
    - The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like a solution that doesn't require booting that Windows XP install (i.e. keeps the disk read-only)
    - Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network)

    However, all the solutions I've come across either make use of disk2vhd or use the dd command under Linux, which makes a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
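    One route from a Linux live system is qemu-img, which writes VHD under the format name "vpc" and produces a dynamic (sparse) image by default, so all-zero blocks take no space; a sketch, assuming the source disk is /dev/sda and a mounted target at /mnt/storage (both names are assumptions):

        # convert straight from the read-only block device to a dynamic VHD
        qemu-img convert -O vpc /dev/sda /mnt/storage/disk.vhd
        # or, if a raw dd image already exists:
        qemu-img convert -O vpc /mnt/storage/disk.raw /mnt/storage/disk.vhd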

    Read the article

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the Trilogy sites; one of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity, or if there are sources or methods I don't know about. I can think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":

    - keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up a proper library); when you need a picture, there's one in there
    - maybe keep pictures on paper, too, e.g. from magazines or ads
    - when you are looking for a picture, search online (Flickr, Google, stock photos) -- this has never really worked for me, I don't know why
    - produce the pictures yourself, i.e. have a good library of source material, or find some online and apply some creativity and suitable software; I could imagine that this could work well once you have the practice

    Read the article

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it fails with the error below:

        $ sudo gem install sqlite3
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3:
            ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite3-devel'
        and check your shared library search path (the location where your sqlite3 shared library is located).
        extconf.rb failed
        *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
        Check the mkmf.log file for more details. You may need configuration options.
        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib
        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)

    See the breakage above.
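    When the headers are installed but extconf.rb still can't see them, pointing the gem build at them explicitly sometimes helps; a sketch, assuming sqlite-devel put sqlite3.h under /usr/include and the library under /usr/lib64 (both worth verifying first):

        # confirm where the header and library actually landed
        rpm -ql sqlite-devel | grep -E 'sqlite3\.h|libsqlite3'
        # then pass the locations through to extconf.rb
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64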

    Read the article

  • .NET 3.5 installation comes up with Error 0x800F0906, then 0x800F081F using dism

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup to download/install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I should pop in my Windows 8 CD and install it with this command, where E:\ is where the CD is mounted:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This, and any derivative of it (e.g. removing /LimitAccess), has not worked for me; it either gives the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, in case something was wrong with the CD drive, only to get the same result. In that case I used this command line:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there, but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :)

    EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the commands in the script, such as "featurename" or "source".
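    For what it's worth, 0x800F0906/0x800F081F on Windows 8.1 with a source folder that is demonstrably present is widely reported to be caused by the .NET security updates KB2966826, KB2966827 and KB2966828 being installed before the 3.5 feature; a sketch of the commonly suggested workaround (the KB numbers come from those public reports -- check under "Installed Updates" which ones are actually present first):

        :: remove whichever of the blocking updates is installed
        wusa /uninstall /kb:2966826 /norestart
        wusa /uninstall /kb:2966827 /norestart
        wusa /uninstall /kb:2966828 /norestart
        :: then retry the feature install
        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess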

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500 GB... This amount of data will increase over time, too, since I am backing up photos (24 MB RAW images + 4-8 MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again), and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems:

    - Mozy seems overly expensive per gig at 50c
    - CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!)
    - Backblaze doesn't support Windows Server
    - Carbonite's business pricing is $600 up front for 500 GB of storage... and for $229, they will not back up Windows Servers

    So, other than those, JungleDisk (at 15c per gig) or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article

  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain and which is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate. The firewall on the SQL server is very restrictive: 1433 is open to my web server, but I'm getting conflicting information from the web on what additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails; I can talk on 1433 to request NTLM, but the actual authentication always fails. One source says 137, 138, 139, but those are just the NetBIOS ports -- do I really need those? Another source says 135. Still others seem to say 1434... I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator!

    EDIT: The exact error message:

        Msg 18452, Level 14, State 1, Server , Line 0
        Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
        Msg 20002, Level 9, State -1, Server OpenClient, Line -1
        Adaptive Server connection failed

    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.

    Read the article

  • Postfix selective header_checks: smtpd_relay_restrictions vs. smtpd_recipient_restrictions

    - by luke
    Some of my customers have implemented commercial software that violates the email RFCs, to the point that we have had to relax our header checks. In consequence, we receive more spam. Prolog: I know the domains (customer.com) and IP addresses (a.b.c.d/C) these emails come from. Kind request for help: is it possible to set up one Postfix (2.11) instance on Linux such that:

    - it applies only some header checks for emails from .*@customer.com,
    - but applies all header checks for all other email sources?

    I thought of a combination of mynetworks, including the subnet a.b.c.d/C in smtpd_recipient_restrictions -- allowing all these messages through -- while simultaneously avoiding an open relay with smtpd_relay_restrictions. However, this has not worked out as expected. Any idea or help is highly appreciated. Thanks in advance. Luke

    ==EDIT== For the current issue, I solved the problem by prepending REDIRECTs to header_checks as follows:

        /^received: from.*customer.com.*by mail.own.com.*for.*luke@own.*/ REDIRECT [email protected]

    This works as far as needed. Irrespective thereof, I am still looking for a Postfix configuration that would turn this text-based setting into an IP-address-range-based forwarding rule.... Thanks. Luke
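    Since header_checks apply to everything an smtpd instance receives, one IP-based way to relax them for just the customer is a second smtpd listener in master.cf with the checks switched off, restricted to their address range; a sketch under those assumptions (the port number is arbitrary, and the customer's MTA or your firewall would have to direct their traffic there):

        # master.cf: extra listener that skips header/body checks
        2525      inet  n       -       n       -       -       smtpd
          -o receive_override_options=no_header_body_checks
          -o smtpd_client_restrictions=permit_mynetworks,reject

    With a.b.c.d/C added to mynetworks, only the customer's hosts could use that port, while port 25 keeps the full checks.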

    Read the article

  • SPDIF passthrough not working in Windows 7

    - by adriangrigore
    Hi, I'm running Windows 7 on a computer with an Audigy Platinum eX sound card connected to a surround receiver via optical cable. Sound works fine when listening to non-surround audio sources, such as Windows sounds or MP3s. However, when I view a DVD in Media Center and the S/PDIF passthrough kicks in, I only hear awful noise instead of the movie soundtrack. Also, the receiver does not show the Dolby Digital or DTS symbol, but stays at Dolby Pro Logic, so it seems not to identify the sound encoding properly. I could switch off S/PDIF passthrough and use the sound card's decoder instead, but that's not an option for me, since it would create more problems with regular MP3 playback via the additional stereo receiver that is also connected to the same sound card. I've tried both the default Audigy drivers that come with Windows 7 and the latest drivers from the Sound Blaster website, but the problem remains unchanged. Also, I have verified that the receiver's Dolby Digital decoder is not broken by successfully connecting it to my PS3 to view a Dolby Digital DVD. Besides, S/PDIF passthrough was working fine in Vista before I upgraded to Windows 7. Is there anything else I could try?

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point, all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruption for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue? I can see 3 possible types of solutions:

    - Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience, Pacemaker lowers availability without fencing).
    - Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    - Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' servers.

    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
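    For reference, the resolver knobs mentioned above all live in /etc/resolv.conf; a sketch of the usual tuning (the server addresses are placeholders, and this only shortens the failure penalty rather than removing it):

        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        options timeout:1 attempts:2 rotate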

    Read the article

  • Apt pin and self-hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages whose versions we want to control. Crucially, this includes puppet, which can be sensitive to version differences. I want our desktops to get puppet only from our repository, but also for people to be able to add their own PPAs, enable backports, etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://deb.example.org/apt/ubuntu/lucid/ binary/

    And in /etc/apt/preferences.d/puppet:

        Package: puppet puppet-common
        Pin: release a=binary
        Pin-Priority: 800

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    Currently policy says:

        $ sudo apt-cache policy puppet
        puppet:
          Installed: (none)
          Candidate: (none)
          Package pin: 2.7.1-1ubuntu3.6~lucid1
          Version table:
             2.7.1-1ubuntu3.6~lucid1 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages
                100 /var/lib/dpkg/status
             2.6.14-1puppetlabs1 -10
                500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages
             0.25.4-2ubuntu6.8 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages
                500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages
             0.25.4-2ubuntu6 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages

    If I use n= instead of a=, I get "Package pin: (not found)". I'm just plain confused at this point as to what I should use. Any help appreciated.
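    One thing that may explain the "a=binary" oddity: "a=" matches the archive (suite) name advertised in the repository's Release file, and self-hosted repos often publish no Release file at all, in which case pinning by origin (the hostname) is more dependable; a minimal sketch, assuming the host is deb.example.org as in the sources.list above:

        Package: puppet puppet-common
        Pin: origin deb.example.org
        Pin-Priority: 1001

    A priority above 1000 makes that version win even when it would be a downgrade, which is usually what "only ever install our build" means in practice.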

    Read the article
