Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • How to make a huge RAM drive?

    - by Brandon Moore
    At my old job, when a report was needed I could sit down with someone, pull up results, get immediate feedback, refine my queries, and ultimately have the data we needed, in the format we needed, within 30-90 minutes. I just started working for a new company with a database containing millions of records, and I spent my whole 8 hours making a report that I feel I could have made in less than 2 hours were it not for the massive amount of data the queries are working with, and the fact that I couldn't ask the person needing the data to sit down with me and give me feedback as I pulled up results, as I am used to.

    So I am trying to think of how we can make the server faster... much faster, so that I can have the same level of productivity I'm used to. One thought that just came to mind is that memory is so cheap these days; by my calculations I could buy ten 8 GB RAM sticks for 1000 bucks. What I have never heard of, though, is a device that would let me combine these into a huge RAM drive. So I'd like to know if any such device exists, and if not, what is the largest RAM drive I could realistically make, and how would I go about doing so?

    EDIT: To you guys who are saying the database schema needs to be analyzed: you can't make a query such as "SELECT f1, f2, f3, ... FROM SomeTable" run any faster by normalizing or indexing the table. What I'm talking about IS ABSOLUTELY a need for improved performance at the hardware level. I am used to having results come back to me in a few seconds, not a few minutes, much less half an hour. Maybe that's what you guys with 100-billion-record tables are used to and feel is fast, but I'm looking for results from tables of about 10 million records to come back to me within half a minute TOPS.
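
    For the "how would I go about doing so" part: on Linux, no special hardware is needed, because the kernel can expose RAM directly as a filesystem. A minimal sketch, assuming a Linux database host; the size and mount point are invented for illustration, and note that tmpfs contents vanish on reboot:

        # carve a RAM-backed filesystem out of system memory
        sudo mkdir -p /mnt/ramdisk
        sudo mount -t tmpfs -o size=64g tmpfs /mnt/ramdisk
        # copy a read-only snapshot of the data files there, then point the
        # database (or a reporting replica) at that directory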


  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello All, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either one isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get our public DNS entries from Dotster just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example:

    public.webdomain.com - should get this from Dotster
    outside.webdomain.com - should get this from Dotster as well
    testing.webdomain.com - should get this from my internal DNS controller

    The problem that I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use; I have tried both Linux BIND9 and a Windows 2008 Domain Controller. I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, when a requested entry doesn't exist, fail over to the specified public DNS server -OR- is this just not the way DNS works and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to Dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance
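
    In case a concrete config helps frame the question: with BIND, one common pattern is to avoid declaring a zone for webdomain.com at all, and instead declare a tiny authoritative zone only for each name to be overridden, letting global forwarders answer everything else. A minimal named.conf sketch; the forwarder address and file path are placeholders, not from the actual setup:

        options {
            // anything we are not authoritative for goes upstream
            forwarders { 8.8.8.8; };
            forward only;
        };

        // override ONLY this name; public.webdomain.com et al. still
        // resolve through the forwarders
        zone "testing.webdomain.com" {
            type master;
            file "/etc/bind/db.testing.webdomain.com";
        };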


  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things, but I am wondering if there is a better option. The following list is ordered by decreasing quality:

    1. inkscape --export-area-page --export-eps=out.eps in.pdf - using the graphical program Inkscape works best, but is a bit slow;
    2. pdftops -eps in.pdf out.eps - uses Poppler, works well and is fast;
    3. pdf2ps in.pdf out.eps - uses Ghostscript and works OK for simple documents;
    4. convert in.pdf out.eps - uses ImageMagick and always rasterizes the image.

    I haven't tested the following:

    acroread -toPostScript - uses acroread (Linux only)

    Some issues I've found:

    - Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape does this best by only rasterizing the unsupported area.
    - Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
    - Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of possibilities; please correct me if I'm wrong. Related post: How to convert PDF to EPS? on TeX


  • Hyper-V 2008 R2 synthetic networking stops working with Linux 2.6.32.15

    - by luxifer
    Hi there, I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they simply fail: no packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just keeps "Stopping" like forever. Emulated networking works fine, but it is slow and uses more CPU; as it stands, my firewall routes more slowly than when it ran under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform that Atom CPU, especially since it's never going anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this, but I can't quite follow the author on the part where he solved the issue - especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?


  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use putty from the command line in a Hudson job. The command is the following one:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build is running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. putty) from a Hudson job?

    ps: I tried the SSH plugin but... not a really good plugin (pre/post build, fail status of the launched commands not caught by Hudson, etc.)

    Thanks in advance for your help. Best regards. kij

    EDIT: These are the build logs:

        [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
        C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
        Le build a été annulé (the build was cancelled)
        Finished: ABORTED

    And the Hudson.err.log file at the same time (after a stop):

        3 juin 2010 18:27:28 hudson.model.Run run
        INFO: Artifact deployer #6 aborted
        java.lang.InterruptedException
            at java.lang.ProcessImpl.waitFor(Native Method)
            at hudson.Proc$LocalProc.join(Proc.java:179)
            at hudson.Launcher$ProcStarter.join(Launcher.java:278)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.Build$RunnerImpl.build(Build.java:174)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
            at hudson.model.Run.run(Run.java:1241)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.
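
    A guess worth noting, sketched rather than verified: putty is the GUI client, and when run headless from a service it can block forever on a dialog no one can see (for example a host-key confirmation prompt). PuTTY ships a command-line counterpart, plink, which takes the same connection flags; -batch makes it fail instead of prompting:

        plink -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt -batch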


  • LSI MegaRAID on Linux back to Optimal after degradation, but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span: 2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d)

    So could a battery issue have caused the problem? Here is the information about the battery:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I have alarms disabled, but I get them if they're enabled. I just don't know how to find the root of the problem.
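
    A hedged sketch for digging further from the OS side with the MegaCli utility (the binary name and adapter number vary by install; MegaCli64 and adapter 0 are assumptions):

        # battery state, charge level and relearn status for adapter 0
        MegaCli64 -AdpBbuCmd -GetBbuStatus -a0
        # the controller event log usually records exactly when and why the
        # cache policy fell back from WriteBack to WriteThrough
        MegaCli64 -AdpEventLog -GetEvents -f events.log -a0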


  • Unable to install mysql-server in Ubuntu

    - by Arihant
    I am unable to install mysql-server on my Ubuntu 9.10 server machine. When using apt-get install mysql-server, the output is:

        # apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 120 not upgraded.
        2 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up mysql-server-5.1 (5.1.37-1ubuntu5.4) ...
         * Stopping MySQL database server mysqld    [ OK ]
         * Starting MySQL database server mysqld    [fail]
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I can't find a satisfactory solution to this problem anywhere. Many sites say to reinstall it, but that's not working. Any help will be appreciated. Thank you.
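
    The dpkg output only says the init script failed; the actual reason is almost always in MySQL's own log. A sketch of the next diagnostic steps, assuming the Ubuntu default log locations:

        # look for the real startup error (bad my.cnf option, corrupt
        # InnoDB log files, port 3306 already in use, ...)
        sudo tail -n 50 /var/log/mysql/error.log
        sudo grep mysqld /var/log/syslog | tail -n 20
        # once the cause is fixed, let dpkg finish the half-configured packages
        sudo dpkg --configure -a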


  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before:

    - Is turning off hard disks harmful?
    - What's the effect of standby (spindown) mode on modern hard drives?

    Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics. Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down?

    ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. That study is no help here, since all the disks surveyed were powered on 24/7.
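
    The energy half of the break-even can at least be written down symbolically; the wear half is exactly the missing datum. A sketch, where every symbol is an assumption to be filled with measured numbers rather than data from any study:

        % spinning down for X seconds pays off when the idle-power saving
        % exceeds the spin-up cost plus the amortized wear cost of one cycle:
        %   P_idle : power while spinning idle (W),  P_stby : power in standby (W)
        %   E_up   : extra energy of one spin-up (J)
        %   C_wear : wear cost of one start/stop cycle, in joule-equivalents
        X > \frac{E_{up} + C_{wear}}{P_{idle} - P_{stby}}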


  • Java applet class never found

    - by Andrew
    I am having a problem with the Java plug-ins of all three major browsers: Chromium, Firefox and Internet Explorer. The issue is that applets always fail to load and yield errors, no matter what website or what applet I am trying to use. I cannot see any of the example applets on Oracle's website; I cannot launch the Minecraft browser applet; nor can I launch any other applet across the web, in any browser. I always receive a "class not found: AppletClassHere.class" error. I know that my browser plug-in must be working correctly, because I can launch the Cisco NAC applet each time I connect to my network, and I can launch my own simple applets offline. I am connecting to the internet through a VPN, in case that is important. It also seems like a lot of other people I know are having this issue, and re-installing the plug-in does not appear to fix it. I am asking this question to find out the cause of this problem, and any possible solutions. Thank you.


  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits SuperUser more than StackOverflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start off with some stats before I ask the actual question:

    Laptop:
    - Windows 7 x32
    - Screen resolution 1024 x 768; Nvidia GeForce Go 6200
    - Connected to desktop via ad-hoc wireless network
    - Accesses internet via desktop

    Desktop:
    - Windows 7 x64
    - Screen resolution 1920 x 1080
    - Connected to laptop via ad-hoc wireless network
    - Accesses internet via cable modem

    I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen. I'm doing programming on my laptop (for portability reasons). Everything else runs smooth and fast over Remote Desktop, as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display their GUI properly. I acquired a Java Swing application and I'm debugging it in Eclipse; when I ran the app, the GUI came up squashed and garbled. Apparently there isn't anything wrong with the GUI application I'm debugging itself, because the Java Control Panel exhibits the same problem. I've searched high and low on Google about this; the closest I came to a solution is this. But sadly, the use of -Dsun.java2d.noddraw=true has no effect at all. This only happens over Remote Desktop; I have tried locally and the GUI apps display properly. This isn't a dealbreaker for me, as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found any solution. PS: All software involved (Eclipse, Java JRE, etc.) is the latest version.
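
    For anyone hitting the same wall: the Java2D hardware pipelines are the usual suspects for remoted Swing rendering, and they can be switched off per launch. A sketch of VM arguments to try; the flag spellings are the documented Java2D ones, but whether they cure this particular squashing is untested, and the jar name is a placeholder:

        # fall back to unaccelerated GDI rendering over the RDP session
        java -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -jar MySwingApp.jar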


  • Linux Mint VIA sound issue

    - by user2699451
    So I installed Linux Mint 15 "Olivia" 64-bit on my Mecer W550EU laptop. I have HD Audio with a VIA chipset:

        charles-W55xEU charles # lsmod | grep snd
        snd_hda_codec_hdmi     36913  1
        snd_hda_codec_via      51018  1
        snd_hda_intel          39619  5
        snd_hda_codec         136453  3 snd_hda_codec_hdmi,snd_hda_codec_via,snd_hda_intel
        snd_hwdep              13602  1 snd_hda_codec
        snd_pcm                97451  4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30180  1 snd_seq_midi
        snd_seq                61554  2 snd_seq_midi_event,snd_seq_midi
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29425  2 snd_pcm,snd_seq
        snd                    68876  19 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_via,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device
        soundcore              12680  1 snd

    And my sound card:

        charles-W55xEU charles # aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: VT1802 Analog [VT1802 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 2: VT1802 HP [VT1802 HP]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And my audio device:

        charles-W55xEU charles # lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
            Subsystem: CLEVO/KAPOK Computer Device 0550
            Flags: bus master, fast devsel, latency 0, IRQ 47
            Memory at f7c10000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel

    Sometimes when I boot up, sound works; other times it doesn't. It is completely random, and so far no one in the XChat Linux help channels or on the Linux Mint forums has been able to help me. I have always had issues with sound on VIA chipsets. I ran:

        sudo apt-get upgrade && apt-get install mint-meta-cinnamon

    It seemed to help, but after 2-3 reboots the problem came back. By the way, every time I checked, PulseAudio was set to Duplex Audio Input & Output, and alsamixer was always unmuted!
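
    One avenue that sometimes tames flaky HDA codecs like this VT1802 is pinning a model quirk for snd-hda-intel and reloading ALSA, so the codec is set up the same way on every boot. A sketch; the model value is an assumption to be tuned against the options listed in the kernel's HD-Audio-Models.txt:

        # append to /etc/modprobe.d/alsa-base.conf (model value is a guess)
        options snd-hda-intel model=auto
        # then reload the sound stack without a reboot
        sudo alsa force-reload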


  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

        The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed
        because the Volume Shadow Copy Service operation to create a shadow copy of the
        volumes being backed up failed with following error code '2155348129'. Please
        review the event details for a solution, and then rerun the backup operation
        once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts:

    - I get the error even if all the VMs are shut down.
    - If I disable the Hyper-V VSS writer by stopping the Hyper-V Management Service, the backup completes OK.
    - There are no errors in the Hyper-V-VMMS application log.
    - I tried to set tracing for VSS but can't get any output for some reason; I set the correct registry entries but no trace log is generated.

    Tim
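
    Since the writer is hosted by the Virtual Machine Management Service, one low-risk reset worth sketching (running VMs are generally unaffected when vmms restarts, though that is an assumption to verify rather than a guaranteed fix):

        net stop vmms
        net start vmms
        rem the Hyper-V writer should now report State: [1] Stable
        vssadmin list writers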


  • Network Explorer Intermittently Fails to Display all Computers in Work Group

    - by graf_ignotiev
    I run a small computer lab of 10 computers, and occasionally, when using the network explorer (a.k.a. Network Browser), some or all of the remote computers fail to appear. If I try to access a remote computer by its name I get an unspecified error (code 0x80004005), but I am still able to access it by the computer's IP address. The strangest part is that the problem will inexplicably go away after waiting a while. Each computer is running Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following solutions:

    1. Updated the BIOS, chipset and network adapter drivers
    2. Changed Power Settings in Network Adapter Properties so that the computer will not turn it off
    3. Disabled the Computer Browser service
    4. Changed the DHCP node type to broadcast
    5. Reviewed the Event Viewer logs

    Steps 3 and 4 seem to have helped the problem a little bit, but not completely. I'm beginning to suspect that the problem might lie with our router, a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective on the issue before I went any further.
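
    Next time it happens, it may help to separate NetBIOS name resolution from the browse list itself. The commands below are a sketch, with the computer name invented:

        rem does a direct NetBIOS name lookup still work for the missing machine?
        nbtstat -a LAB-PC07
        rem can its shares be reached by name, bypassing the browse list?
        net view \\LAB-PC07
        rem is the whole browse list empty, or just missing some entries?
        net view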


  • Name resolution not working with IPv6 on CentOS

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because it is trying to use IPv6, since ping google.com works, and curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface, and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, because they might be using a different mechanism for name resolution than the one used by curl.

    I managed to fix this on workstations by completely disabling IPv6, following tutorials like this one / hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would like not to mess up, and instead understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed.

    The current network setup is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS server. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation both have multiple nameservers in /etc/resolv.conf, and the second one works fine for both of them, so if I remove A from my resolv.conf, everything works fine! Regards, Olivier
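
    Two knobs commonly involved in exactly this symptom, sketched here as things to test rather than as the confirmed cause. The first makes glibc send its A and AAAA queries sequentially instead of in parallel, which old or broken DNS servers like "A" sometimes mishandle; the second is the CentOS 6 way of disabling IPv6 system-wide:

        # /etc/resolv.conf -- try this first; it changes resolver behavior only
        options single-request

        # /etc/sysctl.conf -- the heavier hammer; apply with `sysctl -p`
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1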


  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All of this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure: http://ubuntuforums.org/showthread.php?t=1874860. I installed prosody with the following command (after downloading the package):

        sudo dpkg -i prosody_0.8.2-1_i386.deb

    After installation, I get the errors below. I have tried using LuaRocks and sudo apt-get install as suggested to fix them, but it still keeps showing me the same errors:

        Selecting previously deselected package prosody.
        (Reading database ... 59416 files and directories currently installed.)
        Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
        Setting up prosody (0.8.2-1) ...
         * Starting Prosody XMPP Server prosody
        **************
        Prosody was unable to find luaexpat
        This package can be obtained in the following ways:
          Source: www.keplerproject.org/luaexpat/
          Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
          luarocks: luarocks install luaexpat
        luaexpat is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find luasocket
        This package can be obtained in the following ways:
          Source: www.tecgraf.puc-rio.br/~diego/professional/luasocket/
          Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
          luarocks: luarocks install luasocket
        luasocket is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find LuaSec
        This package can be obtained in the following ways:
          Source: www.inf.puc-rio.br/~brunoos/luasec/
          Debian/Ubuntu: prosody.im/download/start#debian_and_ubuntu
          luarocks: luarocks install luasec
        SSL/TLS support will not be available
        More help can be found on our website, at prosody.im/doc/depends
        [fail]
        invoke-rc.d: initscript prosody, action "start" failed.
        dpkg: error processing prosody (--install):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Errors were encountered while processing:
         prosody

    Thanks a lot for your patience and answers.
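
    For what it's worth, the two hard requirements named in the output are the exact Debian packages Prosody itself suggests; LuaSec is optional (only TLS support is lost without it). A sketch of installing them and letting dpkg finish the interrupted configuration, with the caveat that a source-built Lua 5.1.4 may shadow the packaged Lua these libraries link against:

        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2
        sudo dpkg --configure -a   # re-run prosody's interrupted postinst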


  • Some Windows 8 Metro (Modern UI) apps don't open, but live tiles DO work

    - by DiegoDD
    Some of my default Windows 8 Metro (Modern UI) apps don't work properly; the apps in question are the Weather and News apps. When I open them, I only get the loading screen, that is, the big colorful screen with the logo in the center and the "loading" spinning dots. They just stay like that forever and never actually open. There are no error messages or anything. The strange thing is that the live tiles for those apps DO work and show current information: current weather for my location (saved before they started failing) and current news. Both apps did work well in the past, and every other default Windows 8 app I've tried works well (both the live tiles and the app itself). Every other non-default app that I've downloaded still works correctly, so the problem so far is only with those two apps (Weather and News). I really don't know exactly when they started to fail, because I don't use them (or any Metro app) that often, so I can't recall if I did something like installing some other app (either Metro or regular), messing with some drivers, etc. They simply don't open now, but they DID work before. Any ideas?
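
    The standard first-aid for stuck-at-splash Store apps, offered as a sketch rather than a confirmed fix for this case: clear the Store cache, then re-register the built-in app packages with the commonly circulated PowerShell one-liner (run elevated):

        wsreset.exe
        # then, in an elevated PowerShell:
        Get-AppxPackage -AllUsers | Foreach {
            Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"
        }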


  • Why is 10-year-old software still so slow, even today?

    - by Cawas
    I just noticed this question because of a game (which happens to be Diablo 2), but the matter of fact is: why can't my brand-new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest one), rival the computer that used to run this game much faster back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown. I've always had the feeling this machine was slow, but this is a very odd way to confirm it. Granted, the machine is smaller, runs on Wi-Fi, and "boots" way faster thanks to sleep mode. But other than that, what have we evolved, after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm that and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago. The software and operating systems became more complex, but also more refined. Now I'm trying a piece of software that is actually 10 years old, and it's not getting any better results! Why?


  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain and is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate. The firewall on the SQL Server machine is very restrictive. Port 1433 is open to my web server, but I'm getting conflicting information from the web on which additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails: I can talk on 1433 to request NTLM, but the actual authentication always fails. One source says 137, 138, 139, but those are just the NetBIOS ports. Do I really need those? Another source says 135. Still others seem to say 1434... I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator!

    EDIT: The exact error message:

        Msg 18452, Level 14, State 1, Server , Line 0
        Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
        Msg 20002, Level 9, State -1, Server OpenClient, Line -1
        Adaptive Server connection failed

    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.
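
    While sorting out the port question, it can help to take the web app out of the loop and test with FreeTDS's own client, watching the protocol exchange it logs. A sketch; the server name, credentials and TDS version are placeholders:

        # /etc/freetds/freetds.conf
        [sqlbox]
            host = sqlserver.example.com
            port = 1433
            tds version = 8.0

        # from the web server; TDSDUMP makes FreeTDS log the whole exchange
        TDSDUMP=/tmp/tds.log tsql -S sqlbox -U 'servername\username' -P 'secret'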


  • SQLRelay MySQL compatibility layer in php-cgi.

    - by sybreon
    I am investigating the use of SQLRelay as a middle layer between an application that uses MySQL and a PostgreSQL backend; I assume this is something it can do to ease backend migration. For the moment, though, I am just experimenting with a MySQL application accessing a MySQL backend through the SQLRelay layer:

        app => sqlrelay lib => mysql client lib => tcp => mysql server

    I followed the instructions for the MySQL drop-in replacement and it works: I can connect to the backend MySQL server using both sqlrsh and the mysql client application. It works for most MySQL applications by setting LD_PRELOAD to the compatibility layer library. The instructions recommend re-compiling PHP to support it; I would prefer not to do something so drastic. They also recommend setting LD_PRELOAD for apachectl as a method for the apache/php stack. However, this does not work with lighttpd/php-cgi. I have wrapped php-cgi with a shell script that sets LD_PRELOAD before running the CGI binary:

        LD_PRELOAD=/usr/lib/libmysql50sqlrelay-0.39.4.so.1 /usr/bin/php5-cgi $@

    I can see LD_PRELOAD correctly set in phpinfo(), but the CGI scripts all fail and are unable to connect to the database. According to the mysql client, the compatibility library should report itself as a 5.0.0 client, but the phpinfo module reports the actual 5.0.51a client library being used. This means the compatibility library was not used. Has anyone had some success doing something similar?


  • What could cause a WMV to not play to completion in a browser?

    - by Ty W
    A realtor has had videos created for a community she is selling homes for, and the people who made the videos gave them to us in WMV format. I can play these videos without any problem in Windows Media Player, VLC, and QuickTime (via Flip4Mac). I can also play the videos from their location at videohomeguide.com in my browser without any trouble. However, when I upload the files to our server, the video stops at about the 1-minute mark in Safari and Firefox on Mac OS X Snow Leopard. I'm not sure if Windows browsers have the same issue, because there the videos are loaded using Windows Media Player.

    http://carolepaul.com/images/uploads/cottageslsjamestown.wmv <- our server, will fail at 1:09ish
    http://www.videohomeguide.com/media/cottageslsjamestown.wmv <- should play to completion (3:27ish)

    The files generate the same MD5 hash on my desktop and on our server. I used wget to transfer the files, always downloading from videohomeguide.com. Since the files are identical, are playable with VLC/WMP/QuickTime, and are playable in the browsers from videohomeguide.com, it seems to me that it is some sort of server config... maybe incorrect headers sent to the browsers? Here are the headers sent and received by Firefox on OS X for http://carolepaul.com/images/uploads/cottageslsjamestown.wmv:

        GET /images/uploads/cottageslsjamestown.wmv HTTP/1.1
        Host: carolepaul.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Mon, 29 Mar 2010 20:43:20 GMT
        Server: Apache/1.3.41 (Unix) PHP/5.2.6 FrontPage/5.0.2.2635 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
        Last-Modified: Wed, 02 Dec 2009 18:08:46 GMT
        Etag: "1e7919c-198eadc-4b16ad2e"
        Accept-Ranges: bytes
        Content-Length: 26798812
        Keep-Alive: timeout=10, max=200
        Connection: Keep-Alive
        Content-Type: video/x-ms-wmv
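
    One quick way to chase the header theory is to pull the response headers from both servers and diff them; a sketch:

        curl -sI http://carolepaul.com/images/uploads/cottageslsjamestown.wmv > ours.txt
        curl -sI http://www.videohomeguide.com/media/cottageslsjamestown.wmv > theirs.txt
        diff ours.txt theirs.txt   # compare Content-Type, Content-Length, Keep-Alive, Accept-Ranges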


  • Is a rubber keyboard suitable for heavy use?

    - by Vilx-
    Every keyboard wears out with time, and mine already has some age on it. The day it fails is coming closer and closer, so I'm slowly starting to look around for a new one. I use the keyboard for gaming and programming, so it gets some pretty solid use. I also tend to eat by the computer, so there are plenty of... uhh... lifeforms down there. Anyway, I was looking at these rubber keyboards. They come pretty cheap (my local computer shop has one for less than $20) and they seem to have some nice properties: they can be easily cleaned, they're quiet, and they can be rolled up when needed (plus no worries about spilled drinks). However, I'm wondering about their type-ability. If I can't write on one at a decent speed, the rest of the features don't matter. Not that I'm a fast typist, but being a professional programmer does give a boost to the skill. I couldn't find any reviews on the net, so I'm turning to you. Who has used these keyboards, and what was your experience? Perhaps there is something else I haven't thought of that would make such a keyboard a bad idea?


  • Debian apache2 restart failure after some updates

    - by Ripeed
    Can anyone give me some advice with this, please? I ran an update on my Debian server via Webmin, covering apache2 and related packages, and the update failed. Since then I can't start apache2 normally. I have to run

        netstat -ltnp | grep ':80'

    then kill the PID (e.g. kill -9 1047), and only then can I start apache2. When I started it for the first time after the update, some websites on FastCGI wouldn't work; I had to switch them to mod-PHP in ISPConfig 3, and now they work. As it stands, I can't restart apache2 without killing the PID first. In the ISP log I see:

        Unable to open logs
        (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down

    In the log of one website I see:

        [emerg] (13)Permission denied: mod_fcgid: can't lock process table in pid 19264

    Do you think the solution is to update everything with apt-get update and apt-get upgrade to complete all the updates? I'm a little scared that if I do that, further errors will occur. If I look at the apache log I see the error:

        Debian Python version mismatch, expected '2.6.5+', found '2.6.6'

    but that was there before this problem appeared. Thanks a lot for any help.
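
    A hedged alternative to hunting the PID by hand on every restart: stop apache2 cleanly, then kill whatever is still bound to port 80 in one step before starting again:

        sudo /etc/init.d/apache2 stop
        sudo fuser -k 80/tcp        # kills any process still holding :80
        sudo /etc/init.d/apache2 start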


  • Windows 8: Clean Install Fails BEFORE country Screen

    - by Mark
    G'day, I've been trying to (clean) install Windows 8 on my PC for over two weeks now, and it's really getting old. You can see here that I've outlined my issue to the Windows Technical Community, which has resulted in... absolutely no help at all: http://answers.microsoft.com/en-us/windows/forum/windows_8-windows_install/windows-8-clean-install-fails-before-country/64229d0a-0220-45a9-bdc6-c41062df8a75

    Tl;dr? Yeah, me too. Basically, I've shrunk the HDD, and it's working (I'm writing this on that PC, on XP). I've tried both the x64 and the x86 DVDs, AND another HDD with W8 installed on it from my test machine, and they ALL fail to load. (I get the slanty Windows logo, and after about 10 seconds, BOOM: sad-face error screen.) I really don't want to have to remove the video card, or the unevenly sized/unpaired DIMM in the second memory channel, because the case is... stupid, and in an awkward spot, but I'm running out of ideas!

    PS. I tried turning on AHCI tonight. All that resulted in was XP wigging out about new drivers and explorer.exe crashing. Fun times! :(


  • Apache with PHP FastCGI keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my apache configuration for MPM worker:

        KeepAlive On
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!
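
    A back-of-the-envelope capacity check, using only the values shown above (arithmetic, not measurements):

        # worker MPM:   MaxClients / ThreadsPerChild = 20 / 10 = 2 child processes,
        #               20 concurrent connections in total
        # fastcgi side: one class x PHP_FCGI_CHILDREN=8 = 8 PHP requests at a time
        # => up to 12 apache slots can queue behind 8 PHP workers; anything
        #    waiting > 20 s trips "-idle-timeout 20", matching the logged aborts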


  • VSFTPD 530 Login incorrect

    - by sc.
    I'm trying to get a vsftpd server set up on CentOS 5.3 x64. I'm not able to get any local user logins to work. Here is my vsftpd.conf:

        local_enable=YES
        write_enable=YES
        pam_service_name=vsftpd
        connect_from_port_20=YES
        listen=YES
        pam_service_name=vsftpd
        xferlog_std_format=NO
        log_ftp_protocol=YES
        chroot_local_user=YES

    Here is the output of vsftp.log:

        Mon Sep 13 23:34:44 2010 [pid 19243] CONNECT: Client "10.0.1.138"
        Mon Sep 13 23:34:44 2010 [pid 19243] FTP response: Client "10.0.1.138", "220 (vsFTPd 2.0.5)"
        Mon Sep 13 23:34:44 2010 [pid 19243] FTP command: Client "10.0.1.138", "USER dwelch"
        Mon Sep 13 23:34:44 2010 [pid 19243] [dwelch] FTP response: Client "10.0.1.138", "331 Please specify the password."
        Mon Sep 13 23:34:44 2010 [pid 19243] [dwelch] FTP command: Client "10.0.1.138", "PASS <password>"
        Mon Sep 13 23:34:44 2010 [pid 19242] [dwelch] FAIL LOGIN: Client "10.0.1.138"
        Mon Sep 13 23:34:45 2010 [pid 19243] [dwelch] FTP response: Client "10.0.1.138", "530 Login incorrect."

    And the output of the secure log:

        Sep 13 17:40:50 intra vsftpd: pam_unix(vsftpd:auth): authentication failure; logname= uid=0 euid=0 tty=ftp ruser=dwelch rhost=10.0.1.138 user=dwelch

    It looks like PAM is not authenticating the user. Here is my /etc/pam.d/vsftp file:

        #%PAM-1.0
        session    optional     pam_keyinit.so    force revoke
        auth       required     pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
        auth       required     pam_shells.so
        auth       include      system-auth
        account    include      system-auth
        session    include      system-auth
        session    required     pam_loginuid.so

    Can anyone see what I'm missing? Thanks.
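
    Given the PAM stack shown, two quick checks it directly suggests, sketched for the user in the log: pam_listfile denies anyone listed in /etc/vsftpd/ftpusers, and pam_shells rejects any account whose shell is not in /etc/shells:

        grep dwelch /etc/vsftpd/ftpusers     # listed here => denied by pam_listfile
        getent passwd dwelch | cut -d: -f7   # this shell must appear in /etc/shells
        cat /etc/shells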

