Search Results

Search found 20631 results on 826 pages for 'release management'.

  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming related, but this is an OS question. I'm writing a small high-performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16,350) in a benchmark that simply opens a connection, does its thing, and closes the connection, the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application). So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on? I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16,000, waiting -- I assume -- for the OS to release a socket. This shouldn't happen, since all the prior sockets are closed by that point. They're supposed to become available at the rate they're closed, and they do on Ubuntu, but there seems to be some kind of multi-second (5-10?) delay on Mac OS X. I tried tweaking ulimit every which way. Nada.
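
    One experiment worth trying (a sketch only, on the assumption that the stall is sockets draining through TIME_WAIT; lowering the maximum segment lifetime is a diagnostic step, not a production setting):

        # Count sockets parked in TIME_WAIT while the benchmark runs
        netstat -an | grep -c TIME_WAIT
        # Inspect, then temporarily shrink, the maximum segment lifetime (milliseconds)
        sysctl net.inet.tcp.msl
        sudo sysctl -w net.inet.tcp.msl=1000

    If the hang shortens in proportion to the MSL value, the limit is TIME_WAIT draining rather than a throttle on socket creation itself.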

  • Install Windows 8 on SSD and Program & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. In my old configuration, I had Windows 7 installed on my 60 GB SSD, and my programs and user data on my 1 TB HDD, thanks to relative links. Yet while installing Windows 8 on my SSD, it created a small "system related" partition on my HDD. Plus, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive. I read a lot about optimizing Windows 8 for an SSD, putting Users on another drive, and very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried: http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html -- booting in Audit mode and using an XML answer file to relocate Users didn't work, as the version specified in the file is a test one, and I don't know what to enter if I'm using the final release. Booting with the install DVD in repair mode to copy Users and create a relative link resulted in an error on the logon screen while entering my password, saying "The profile can't be loaded" (rough translation of the error from French to English). Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.
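
    For reference, a sketch of the junction-based variant of that relocation (run from a second admin account or WinPE so that no profile under C:\Users is in use; the drive letters and paths are examples, not taken from the tutorial above):

        robocopy C:\Users D:\Users /MIR /XJ /COPYALL
        rmdir /S /Q C:\Users
        mklink /J C:\Users D:\Users

    The /XJ flag keeps robocopy from recursing into the junction points Windows already places inside profile folders.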

  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS 7 on a computer running Windows 7 Home, 64-bit. Here is what I did: Installed IIS 7. Accessed the server and got the IIS welcome page. Created a directory named d:\MySite and copied the MVC application to it. (The MVC app is just the standard app that is created when you create a new MVC 3 project in Visual Studio. It just displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site and it works fine there.) Started the IIS management console. Stopped the default site. Added a new site named "MySite" with a physical directory of "d:\Mysite". Changed the application pool named MySite to use .NET Framework 4.0, Integrated pipeline. When I access the site in the browser, I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?
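
    A directory listing instead of the application usually means ASP.NET 4.0 is not registered with IIS, so no handler claims the extensionless MVC routes. A sketch of the usual repair on Windows 7 (the path assumes a default .NET 4 install; run from an elevated prompt, then recycle the app pool):

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i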

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have two web farm configurations, one with 2 member servers and one with 3. I have health monitoring set up on both farms, and the monitoring tool reports all servers as healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on the web farm controller or any of the farm servers doesn't reveal anything interesting: there are no warnings or errors in the period when the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms would die during the night when the load is low. Am I missing something? EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an .aspx page on each server. Shouldn't that keep them going? EDIT 2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.
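
    To rule the idle shutdown out (a sketch; "MyAppPool" is a placeholder for whatever pool the farm sites actually run in):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00

    An idleTimeout of 00:00:00 disables the inactivity shutdown entirely, so if the servers still flip to "Unavailable" afterwards, the worker-process events were a red herring.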

  • Screen recording in Windows 8 makes PC unusable?

    - by Skadier
    OS used: Windows 8 Pro x64. Hi there, I've got a weird problem. I tried to record my screen using different screen-recording applications like Camtasia 7, HyperCam, and some others. If I hit "record", everything works fine for about 3-4 seconds. Then the PC slows down heavily, caused by extremely high IO usage. The PC becomes nearly unusable and lags like hell: I can't switch applications, can't get Task Manager with Ctrl+Alt+Del, or anything else, without waiting 1-3 minutes. After about 5-8 minutes and a bit of luck, I am sometimes able to end the recorder's process, after which IO calms down very slowly (100% IO lasts for about 2 minutes before returning to normal). I don't know why this happens. Screen recorders that officially support Windows 8 aren't out AFAIK, but at least Camtasia 7 worked with the Windows 8 Developer Preview and Release Preview (I only had to change the record method to .avi) -- no slowdowns or anything else. Is there anyone out there who knows this problem too? Is there maybe a solution to this problem?

  • Trying to migrate old server to new. Getting duplicate name errors

    - by SpaceCowboy74
    I have an existing server on my network that is running Windows 2000 with SQL Server 2000 on it. We are in the process of moving the server to a Windows 2008 platform, with SQL Server 2008 as well. A few changes are happening, though. For one, applications that were on the old server will now be on a new application server. The issue is, the developers of the original applications hard-coded the server name in the apps and/or batch files. I could change all the code, but that would require weeks of work. My original idea was to change the hosts and lmhosts files to point to the new servers with a different IP. So I implemented the following, where oldserver is the original server and server is the new one brought online:

        hosts:
        192.168.1.10 oldserver
        192.168.1.15 server

        lmhosts:
        192.168.1.10 oldserver #pre
        192.168.1.15 server #pre

    Problem is, when I try this, I get the following errors:

        \\server\c$ -- Logon Failure: The target account name is incorrect.
        \\oldserver\c$ -- A duplicate name exists on the network.

    I know about renaming servers in AD, but can't do so yet, as the original server is in production and I cannot rename it without breaking a lot of things at the moment. I want to do a proof of concept for management before renaming the servers. Any idea how I should resolve this?
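
    For the cutover itself (or inside an isolated proof-of-concept network where the production box cannot collide with it), a sketch of the alternate-name route instead of the hosts-file overlay -- the names follow the example above, and company.local is a placeholder for the real domain:

        netdom computername server /add:oldserver.company.local
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1 /f

    netdom registers the extra name's SPNs, which is what the "target account name is incorrect" Kerberos error is complaining about; the registry value lets the server answer to names other than its own.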

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have: Host OS: Microsoft Windows XP Professional x64 Edition, Version 2003, Service Pack 2. Host physical memory: 8 GB. Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5); I am not sure if it is 32- or 64-bit, but the lsb_release -a command reports LSB Version: core-3.0-ia32, so I guess that would be 32-bit. VMware Player version: 2.5.2 build-156735. I would like VMware Player to allocate more than 4 GB, but when I go to the settings, it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: the VMware Player version, the guest OS, or the host OS? How can I solve this? I understand that for this version of Player there isn't one version for 32-bit and another for 64-bit.
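
    If the UI tops out at 4 GB, the value can be edited directly in the .vmx file with the VM powered off (a sketch; whether Player 2.5 and a 32-bit guest will actually use it is another question, since a 32-bit kernel without PAE cannot address more than 4 GB regardless):

        memsize = "8192"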

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command:

        # apt-get install libstdc++6=4.9.0-8 -V
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
           libstdc++6 (4.8.2-16)
        0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded.

    It gives no error, but apt keeps the version that was already installed, and it also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror (if I run "apt-get download libstdc++6=4.9.0-8", it downloads exactly the desired version). System info:

        # cat /etc/issue.net
        Debian GNU/Linux jessie/sid
        # uname -a
        Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
        # dpkg -l | egrep -i "apt|dpkg"
        ii  apt   0.9.16.1  amd64  commandline package manager
        ii  dpkg  1.17.6    amd64  Debian package management system

    Any suggestions?
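
    One way to force the issue is an explicit pin (a sketch; drop it into a file under /etc/apt/preferences.d/ -- a priority above 1000 makes apt select that version even if it means a downgrade):

        Package: libstdc++6
        Pin: version 4.9.0-8
        Pin-Priority: 1001

    Then run apt-get install libstdc++6 again; if apt still keeps 4.8.2-16, running it with -o Debug::pkgProblemResolver=true should show which dependency is vetoing the change.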

  • Incorrect Internal DNS Resolution

    - by user167016
    I'm having a DNS issue. Server 2008 R2. The first clue was that after being off the network for a month, I could no longer Remote Desktop into my workstation by name; it wouldn't find it, both via VPN and internally. But if I connect using its IP, that works. Now I notice in the server's Share and Storage Management, under Manage Sessions, it's displaying the incorrect computer name for some users. So I try, for one example:

        Ping -a 192.168.16.81
        Pinging BOBS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data:
        (replies all successful)

    Then I try:

        Ping RICHARDS_COMPUTER
        Pinging RICHARDS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data:
        (all replies successful)

    In DHCP, .81 belongs to RICHARDS_COMPUTER. I did try ipconfig /flushdns. Not sure if this is related (apologies if it's not), but when I try to connect, I also get prompted: "The identity of the remote computer cannot be verified. Do you want to connect anyway? The remote computer could not be authenticated due to problems with its security certificate. It may be unsafe to proceed..." It then lists the correct name as the name in the certificate from the remote computer, but claims that the certificate is not from a trusted authority. Any thoughts are most appreciated!
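
    Two names resolving to one address, with ping -a returning the wrong host, points at stale A/PTR records. A sketch of the usual cleanup (the zone name is taken from the output above; the scavenging values are examples):

        rem On each affected client, refresh its own registration:
        ipconfig /flushdns
        ipconfig /registerdns
        rem On the DNS server, enable aging on the zone and scavenging overall:
        dnscmd /Config ourdomain.local /Aging 1
        dnscmd /Config /ScavengingInterval 24
        dnscmd /StartScavenging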

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients: e.g. a client uploads a large file, we modify it and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks -- a lot of management of files, setting up accounts, etc. We are therefore considering online alternatives, with these requirements: Cheap 8-) Easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP. No registration required for users to upload/download files; we ourselves of course need to be able to log in to view uploaded files and upload new ones. No per-user fee. High bandwidth: as files may be GBs in size, neither upload nor download speed can be too slow. Secure: encryption during upload/download, and no way for users to access uploaded files -- once a user has uploaded a file, they (and anyone else besides us) should not be able to access it. To download files, users get a link with a password; ideally the link expires after a set time. No software installation. We do NOT need any sync features, backup, versioning, etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ, etc. seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

  • vmware server 64 bit on ubuntu 9.10 64 bit with P2V windows 2003 SBS poor network speed

    - by RobertHC
    Configuration: Ubuntu 2.6.31-21, 64-bit; VMware Server 2.0.2, 64-bit (latest release). Hardware: Core 2 Quad with 8 GB RAM. Guest: Windows 2003 Server SBS, 32-bit. Dear friends, we have a Windows SBS 2003 converted from physical to virtual with the latest converter available nowadays (http://www.vmware.com/products/converter/ vCenter Converter). Running the P2V 2K3 SBS on VMware Server, it boots fine, but we see abnormal CPU activity and poor LAN speed. Here is what we tried: we removed all unneeded peripherals, removed one NIC (the physical server had two NICs), changed the .vmx to get the NIC recognized as Intel instead of AMD, removed one CPU (the physical server had two CPUs), and removed anything reported as a failed driver in the system event monitor. Nothing worked, with funny results. Here are some test results, all made with the same file copied from different source folders. Copying from the client side (both directions, to/from the server) takes about 10 seconds. Copying the same file from the server side (again from and to the server) gives different results: in the client-to-server direction, the time is roughly 10 seconds (a bit more), but the server-to-client direction is slower -- double the time. Launching simultaneous copies "from server to client" + "from client to server" from the server side results in stuck traffic: 45 seconds to do the copy. VMware Tools are installed and the e1000 driver has been updated. With one processor, CPU activity still goes up and down, but much less than with two. As a test, we installed Windows 2008 Standard 64-bit and repeated all the above tests with exactly the same file; the result is always 5 seconds (which matches the LAN speed). Any idea about this issue is welcome, and thank you if you have any. Kind regards, R.

  • Can you explain how to understand what the 'iwconfig' command displays in Ubuntu-9.04?

    - by Shawn
    I'm having trouble making my wireless connection work, and I realized I don't really know how to use the tools I have -- in this case, the iwconfig command in Ubuntu 9.04. Here is what I get:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        wmaster0  no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"Network"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: Not-Associated
                  Tx-Power=20 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0
        vboxnet0  no wireless extensions.
        pan0      no wireless extensions.

    "Network" is the name of my wireless network, btw. But what does all this mean? How can this information help me acquire a working wireless connection? When I try associating a key using sudo iwconfig wlan0 key s:my_key, I get the following error message: Error for wireless request "Set Encode" (8B2A) : SET failed on device wlan0 ; Invalid argument. I do have the right key, though, so what's the problem?
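
    "Access Point: Not-Associated" in the output above means the card is not attached to any network yet, and iwconfig's key s: option only speaks WEP -- the ASCII key must be exactly 5 or 13 characters, and anything else (or a WPA-protected network) produces exactly this "Invalid argument". If "Network" uses WPA, a sketch with wpa_supplicant instead (the config path is an example):

        wpa_passphrase "Network" "my_key" | sudo tee /etc/wpa_supplicant.conf
        sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
        sudo dhclient wlan0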

  • Mail server DNS failed to resolved by Mac clients

    - by Concordus Applications
    We have two internal DNS servers: one on a Linux server box and the other in the router's DNS management. We set the Linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that access our internal mail server (hostname "mail" internally). When using IMAP or SMTP against the mail server internally, the Mac boxes will sometimes fail to locate the server. If I use nslookup I can see that "mail" points to the correct IP address and is resolved via the correct DNS server, but if I ping "mail" it fails:

        ~ (bash)$ nslookup mail
        Server:    254.254.254.206
        Address:   254.254.254.206#53

        Name:    mail.example.com
        Address: 254.254.254.205

    Note: I replaced our actual internal IP addresses with 254.254.254.*. If I wait a few minutes (3-5), it somehow resolves itself and sends successfully. This happens multiple times a day. The /etc/hosts file on the Mac boxes is the default config:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Is there something about Mac clients I should know to prevent this failed DNS resolution? The client boxes are OS X 10.7.4, 8 GB RAM, i5 MacBooks; the server is Ubuntu 12.04 Server.
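
    One thing worth knowing when debugging this: nslookup talks straight to the DNS server, while ping and the mail client go through the OS X system resolver, so the two can disagree. A sketch for comparing against the resolver path applications actually use (hostname as masked above):

        # Query via the same resolver path applications use
        dscacheutil -q host -a name mail.example.com
        # Flush the resolver cache
        sudo dscacheutil -flushcache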

  • How do I install MySQLdb on a Python 2.6 source build, on Debian Lenny?

    - by nbolton
    I've installed Python 2.6 from source on my Debian Lenny server, as Lenny does not have a python2.6 package. My Python 2.5 has MySQLdb installed and working just fine, because I installed the python-mysqldb package. I figured I could just install MySQLdb from source, but because I have the Lenny python-dev package, it builds against 2.5:

        # python setup.py build
        running build
        running build_py
        copying MySQLdb/release.py -> build/lib.linux-x86_64-2.5/MySQLdb
        running build_ext
        building '_mysql' extension
        gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/usr/include/mysql -I/usr/include/python2.5 -c _mysql.c -o build/temp.linux-x86_64-2.5/_mysql.o -DBIG_JOINS=1 -fPIC
        gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-x86_64-2.5/_mysql.o -L/usr/lib/mysql -lmysqlclient_r -o build/lib.linux-x86_64-2.5/_mysql.so

    I don't want to run python setup.py install, because I'm afraid it's going to screw up MySQLdb on 2.5 -- should I? I imagine it'd just overwrite 2.5 and do nothing for 2.6; maybe there's an argument I can use to install to 2.6? I imagine I would also need to build against 2.6, so how do I do this?
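
    distutils installs into whichever interpreter runs setup.py, so pointing the build at the 2.6 binary leaves the 2.5 copy untouched (a sketch, assuming the source build put a python2.6 binary on the PATH along with its headers):

        python2.6 setup.py build
        sudo python2.6 setup.py install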

  • Software disables itself when the PC is accessed via RDP

    - by blckgrffn
    We have a large, specialty printer that has vendor-specific software that enables its use beyond showing up simply as a printer in the Windows Control Panel. This software recognizes when we RDP into the machine and "disconnects" the PC from the printer within its proprietary control panel. All is well when an application like TeamViewer is used to access the machine. Ostensibly, the application is helping us be safe by "enforcing" that the machine used for the printer is a walk-up workstation, or so the support folks informed me. If TeamViewer etc. fixes the issue, then what is the problem? We have many headless workstations in our warehouse attached to a variety of specialty machines, all used via RDP. We want/need to keep access to the machines the same for the sanity of our production staff. The meat of the question: how, specifically, might a machine know that it is being accessed via RDP (Terminal Services management???), and how might this be defeated without altering an application or driver? Of note, the system being used is a Windows 7 Pro machine hooked to the printer via USB. Thanks! Nat -- edit: Is there any combination of /admin switches, etc. that will possibly fix this? Simply using /admin did not.
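
    One trick worth testing (a sketch, untested against this vendor's check): applications typically detect RDP by asking which session they are running in -- GetSystemMetrics(SM_REMOTESESSION) -- so reattaching the RDP session to the physical console can make it look local again. From an elevated prompt inside the RDP session:

        query session
        rem substitute the ID of your own session from the listing above
        tscon 1 /dest:console

    The RDP window drops, but the session keeps running on the machine's own console, which is often enough to satisfy a walk-up-workstation check.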

  • Discover intended Foreign Keys from JOINS in scripts

    - by Jason
    I'm inheriting a database that has 400 tables and only 150 foreign key constraints registered. Knowing what I do about the application and looking at the table columns, it's easy to say that there ought to be a lot more. I'm afraid that the current application software will break if I started adding the missing FKs because the developers have probably come to rely on this "freedom", but step one in fixing the problem is to come up with the list of missing FKs so we can evaluate them as a team. To make matters worse, the referencing columns don't share a naming convention. The relationships ARE coded informally into the hundreds of ad-hoc queries and stored procedures, so my hope is to parse these files programmatically looking for JOINS between actual tables (but not table variables, etc). Challenges I foresee in this approach are: newlines, optional aliases and table hints, alias resolution. Any better ideas? (Besides quitting) Are there any pre-built tools that can solve this? I don't think regex can handle this. Do you disagree? SQL Parsers? I tried using Microsoft.SqlServer.Management.SqlParser.Parser but all that is exposed is the lexer - can't get an AST out of it - all that stuff is internal.
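
    While the script parsing gets sorted out, a different way to seed the same list (a sketch in plain INFORMATION_SCHEMA so it runs on 2000 and 2008 alike; note this is a name-matching heuristic, not the JOIN-parsing approach, and with no shared naming convention it only catches the columns that happen to match):

        SELECT c.TABLE_NAME  AS child_table,
               c.COLUMN_NAME AS child_column,
               pk.TABLE_NAME AS candidate_parent
        FROM INFORMATION_SCHEMA.COLUMNS c
        JOIN (SELECT ku.TABLE_NAME, ku.COLUMN_NAME
              FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
              JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE ku
                ON tc.CONSTRAINT_NAME = ku.CONSTRAINT_NAME
              WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY') pk
          ON c.COLUMN_NAME = pk.COLUMN_NAME
         AND c.TABLE_NAME <> pk.TABLE_NAME
        ORDER BY child_table, child_column;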

  • Installing Debian 7.6.0 on Lenovo Y50

    - by Girauder
    I was trying to install Debian on my new laptop, a Lenovo Y50 (64-bit, running Windows 8). I got together with a friend and installed Debian on his computer first with no problems. However, I've tried to install Debian several times using the AMD64 KDE and netinst versions and accomplished nothing. First try: installed the KDE version. GRUB would let me choose which operating system I wanted, but when I selected Debian it would only load the command line. Second try: reinstalled, but this time with the netinst version. I only got a black screen where I could type, but nothing else. Third try: tried the netinst again. This time, after making the partitions, I got a message saying that no EFI partition was found. I ignored the message, and this time it wouldn't even load GRUB -- only a command-line interface with "grub rescue" or something. Not once did I get an error during the installation. What am I doing wrong? I assume I need to make an EFI partition or something like that. So why is it that during the first installations it didn't ask me for that? And if that is indeed the problem, how can I solve it? Update: So the installation failed again... as predicted. Here you can find the Disk Management picture: http://postimg.org/image/433cpfkjz/ Please, somebody help me. I keep getting the grub rescue thing. Secure Boot is disabled and Legacy Support is set first.
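
    To at least get the installed system booting from the grub rescue> prompt, a sketch (the partition (hd0,gpt2) is a guess -- ls shows what actually exists, and the one containing /boot/grub is the one to use):

        grub rescue> ls
        grub rescue> set root=(hd0,gpt2)
        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> insmod normal
        grub rescue> normal

    Once booted, running grub-install from Debian writes the loader out permanently.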

  • Adding a 2008 server to a 2003 Domain with DNS devolution?

    - by mvdwege
    I'm running into a problem adding a 2008 server to our existing 2003 domain, and as I am not a Windows admin, I'm not getting the problem here. Some reading on TechNet seems to indicate that DNS devolution is the issue. Here's the setup: DNS for the entire company is hosted on a Unix server running BIND, including the service records for the Windows domain. Our top level is company.local, and functional domains are in subdomains, such as mgt.company.local (our management servers). Our Windows servers live mostly in office.company.local, but some of them live in .mgt.company.local and .customers.company.local. The 2003 servers all successfully authenticate against company.local as the Windows domain. Their position in the infrastructure is set via the primary DNS suffix, under the network settings and the computer name dialog. Trying to do the same with a brand-new 2008 install throws an error, though: "Changing the Primary Domain DNS name of this computer to office.company.local failed [...] The specified server cannot perform the requested operation". I tried googling, but the closest I came was the TechNet article on DNS devolution, and I can't make heads nor tails of how to apply that to my case. Addendum 2012-10-23: The problem is not joining the domain; that works. The problem is that it joins with the wrong name, as .company.local instead of .office.company.local. So far everything works, but I'm rather afraid to run production like this, because sooner or later something is going to complain about the AD name not matching DNS.
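
    A sketch of forcing the suffix past the dialog (the Tcpip\Parameters values are standard, the suffix comes from the setup above; a reboot is needed afterwards, and SyncDomainWithMembership=0 stops the domain join from resetting the suffix back):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v SyncDomainWithMembership /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v "NV Domain" /t REG_SZ /d office.company.local /f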

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: "A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)" Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer -- perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also had someone log onto the machine with an admin account to restart the SQL Server service. Still no luck.
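
    To pin down which protocol actually fails, the protocol can be forced with a prefix on the server name (a sketch from the command line; SQLEXPRESS is the default instance name for Express -- substitute yours):

        sqlcmd -S lpc:.\SQLEXPRESS -E
        sqlcmd -S tcp:localhost\SQLEXPRESS -E
        sqlcmd -S np:\\.\pipe\MSSQL$SQLEXPRESS\sql\query -E

    The same lpc:/tcp:/np: prefixes work in the SSMS server-name box, so whichever one succeeds tells you which listener is actually up.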

  • Sending 10,000 emails?

    - by Philipp Lenssen
    (I don't know whether to submit this to Super User or Stack Overflow.) My friend and I have a subscriber list -- strictly opt-in, for people interested in when we release new projects -- and we may want to send out a first email this year. How would one best send an email to over 10,000 people? Perhaps there is a good, I suppose paid, service which would do this for one? The key point being that the emails ought to arrive, with a good chance of not being immediately labeled as spam due to the sender or something (as it's definitely not spam). When I simply send emails from my 1&1 server, they often immediately land in the spam folder, even though the same email sent through, say, Google Appspot arrives fine. FWIW, the email addresses are not yet verified (people simply signed up for the list by providing an email, and optionally a name). Analyzing what bounced would be a plus. BTW, we will provide an opt-out link with every mail. Thanks for any pointers, and sorry if this doesn't fit ServerFault 100%!
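
    Whichever service ends up doing the sending, deliverability starts with DNS: publish an SPF record naming the hosts allowed to send for the domain (a sketch with example values -- 192.0.2.10 standing in for the real sending server):

        example.com.  IN  TXT  "v=spf1 ip4:192.0.2.10 ~all"

    Bulk-mail providers typically hand out an include: clause for this record, plus a DKIM key to publish alongside it.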

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These create a lot of management overhead, as we have to update them as well as the host operating systems every three months (at the start of the next quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130 GB to each image, which is over 3 TB of extra data to send over the network and a lot more time to apply each image -- and then we still have to boot all 120 VMs and assign each a unique IP address and host name. We would like to get the VMs off the hosts and onto a server, but after looking around for a few days, I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind the ideal solution would be to replace the VMs on the host machines with five thin-client connections, each configured to connect to a server and be given a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it. If another client application has to be installed on the hosts, that would be fine; the only reason I'd like to keep them in VMware Workstation is that students already know to look there for the VMs when they need to use them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?

  • Ant get task throws "get doesn't support nested resources element" error

    - by David Corley
    The following ant xml should work according to documentation, but does not. Can anyone tell me if I'm doing something wrong. The get task should support the nested "resources" element in Ant 1.7.1 which is the version I'm using: -- <target name="setup"> <tstamp/> <!-- set up work areas --> <!--<taskdef name="ccmutil" classname="com.allfinanz.framework.tools.CCMUtil" classpath="\\Abate\Data\Build_Lib\Ivy\com.allfinanz\ccmutil\1.0\ccmutil-1.0.jar"/>--> <!-- 1st one is special, also sets ${project_wa} --> <!--<ccmutil file="${ant.file}" projects="framework, xpbuw, xpb, bil"/>--> <property name="framework_wa" value="../../../framework"/> <property name="xpbuw_wa" value="../../../xpbuw"/> <property name="xpb_wa" value="../../../xpb"/> <property name="bil_wa" value="../.."/> <!-- Create properties to hold the build values --> <property name="out" value="${user.dir}"/> <!-- This may be overridden from the command line --> <property name="locale" value="us"/> <!-- set contextRoot up as a property - this mean that it can be overwritten from the command line e.g.: ant -DcontextRoot=xpertBridge. --> <property name="contextRoot" value="xpertBridge"/> <property name="build_dir" value="${out}/${release}/build"/> <property name="distrib_dir" value="${out}/${release}/distrib"/> <property name="build.number" value="-1"/> <!-- Download dependencies from repo.fms.allfinanz.com--> <get dest="${lib}"> <resources> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=soap&amp;a=soap&amp;v=2.3.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=JBOSS&amp;g=apache-fileupload&amp;a=commons-fileupload&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=regexp&amp;a=regexp&amp;v=1.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=javax.mail&amp;a=mail&amp;v=1.2&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm.ws.webservices&amp;a=webservices.thinclient&amp;v=6.1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=avalon-framework&amp;a=avalon-framework&amp;v=4.2.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jimi&amp;a=jimi&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=batik&amp;a=batik-all&amp;v=1.6&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=bsf&amp;a=bsf&amp;v=2.3.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=rhino&amp;a=js&amp;v=1.5R3&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=commons-io&amp;a=commons-io&amp;v=1.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=commons-logging&amp;a=commons-logging&amp;v=1.0.4&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=xmlgraphics&amp;a=commons&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=barcode4j&amp;a=barcode4j&amp;v=trunkBIL&amp;e=jar"/> <url 
url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=fmcojagt&amp;v=6.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.allfinanz&amp;a=ejbserversupport&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.sun&amp;a=jce&amp;v=1.0&amp;e=zip"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=ssce&amp;a=ssce&amp;v=5.8&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=mq&amp;v=5.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=mqjms&amp;v=5.1&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=NetServerRemote&amp;a=NetServerRemote&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=NetServerRMI&amp;a=NetServerRMI&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jwsdp&amp;a=saaj-api&amp;v=1.5&amp;e=jar&amp;c=api"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jwsdp&amp;a=saaj-impl&amp;v=1.5&amp;e=jar&amp;c=impl"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=org.apache.xmlgraphics&amp;a=fop&amp;v=0.92b&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=xerces&amp;a=dom3-xml-apis&amp;v=1.0&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=org.apache&amp;a=derbynet&amp;v=10.0.2&amp;e=jar"/> <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.sun&amp;a=jsse&amp;v=1.0&amp;e=jar"/> </resources> </get> </target>

  • Records not being saved to core data sqlite file

    - by esd100
    I'm a complete newbie when it comes to iOS programming and much less Core Data. It's rather non-intuitive for me, as I really came into my own with programming with MATLAB, which I guess is more of a 'scripting' language. At any rate, my problem is that I had no idea what I had to do to create a database for my application. So I read a little bit and thought I had to create a SQL database of my stuff and then import it. Long story short, I created a SQLite db and I want to use the work I have already done to import stuff into my CoreData database. I tried exporting to comma-delimited files and xml files and reading those in, but I didn't like it and it seemed like an extra step that I shouldn't need to do. So, I imported the SQLite database into my resources and added the sqlite framework. I have my core data model setup and it is setting up the SQLite database for the model correctly in the background. When I run through my program to add objects to my entities, it seems to work and I can even fetch results afterward. However, when I inspect the Core Data Database SQLite file, no records have been saved. How is it possible for it to fetch results but not save them to the database? - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ //load in the path for resources NSString *paths = [[NSBundle mainBundle] resourcePath]; NSString *databaseName = @"histology.sqlite"; NSString *databasePath = [paths stringByAppendingPathComponent:databaseName]; [self createDatabase:databasePath ]; NSError *error; if ([[self managedObjectContext] save:&error]) { NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]); } // Test listing all CELLS from the store NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entityMO = [NSEntityDescription entityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; [fetchRequest setEntity:entityMO]; NSArray *fetchedObjects = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error]; for (CELL *cellName in fetchedObjects) { //NSLog(@"cellName: %@", cellName); } -(void) createDatabase:databasePath { NSLog(@"The createDatabase function was entered."); NSLog(@"The databasePath is %@ ",[databasePath description]); // Setup the database object sqlite3 *histoDatabase; // Open the database from filessytem if(sqlite3_open([databasePath UTF8String], &histoDatabase) == SQLITE_OK) { NSLog(@"The database was opened"); // Setup the SQL Statement and compile it for faster access const char *sqlStatement = "SELECT * FROM CELL"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error while creating add statement. 
'%s'", sqlite3_errmsg(histoDatabase)); } if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to cell MO array while(sqlite3_step(compiledStatement) == SQLITE_ROW) { CELL *cellMO = [NSEntityDescription insertNewObjectForEntityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; if (sqlite3_column_type(compiledStatement, 0) != SQLITE_NULL) { cellMO.cellName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; } else { cellMO.cellName = @"undefined"; } if (sqlite3_column_type(compiledStatement, 1) != SQLITE_NULL) { cellMO.cellDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)]; } else { cellMO.cellDescription = @"undefined"; } NSLog(@"The contents of NSString *cellName = %@",[cellMO.cellName description]); } } // Release the compiled statement from memory sqlite3_finalize(compiledStatement); } sqlite3_close(histoDatabase); } I have a feeling that it has something to do with the timing of opening/closing both of the databases? Attached I have some SQL debugging output to the terminal 2012-05-28 16:03:39.556 MedPix[34751:fb03] The createDatabase function was entered. 2012-05-28 16:03:39.557 MedPix[34751:fb03] The databasePath is /Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/MedPix.app/histology.sqlite 2012-05-28 16:03:39.559 MedPix[34751:fb03] The database was opened 2012-05-28 16:03:39.560 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.575 MedPix[34751:fb03] CoreData: annotation: Connecting to sqlite database file at "/Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/Documents/MedPix.sqlite" 2012-05-28 16:03:39.576 MedPix[34751:fb03] CoreData: annotation: creating schema. 2012-05-28 16:03:39.577 MedPix[34751:fb03] CoreData: sql: pragma page_size=4096 2012-05-28 16:03:39.578 MedPix[34751:fb03] CoreData: sql: pragma auto_vacuum=2 2012-05-28 16:03:39.630 MedPix[34751:fb03] CoreData: sql: BEGIN EXCLUSIVE 2012-05-28 16:03:39.631 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.632 MedPix[34751:fb03] CoreData: sql: CREATE TABLE ZCELL ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZCELLDESCRIPTION VARCHAR, ZCELLNAME VARCHAR ) ... 2012-05-28 16:03:39.669 MedPix[34751:fb03] CoreData: annotation: Creating primary key table. 2012-05-28 16:03:39.671 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER) 2012-05-28 16:03:39.672 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_PRIMARYKEY(Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES(1, 'CELL', 0, 0) ... 2012-05-28 16:03:39.701 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB) 2012-05-28 16:03:39.702 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.703 MedPix[34751:fb03] CoreData: sql: DELETE FROM Z_METADATA WHERE Z_VERSION = ? 2012-05-28 16:03:39.704 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_METADATA (Z_VERSION, Z_UUID, Z_PLIST) VALUES (?, ?, ?) 
2012-05-28 16:03:39.705 MedPix[34751:fb03] CoreData: sql: COMMIT 2012-05-28 16:03:39.710 MedPix[34751:fb03] CoreData: sql: pragma cache_size=200 2012-05-28 16:03:39.711 MedPix[34751:fb03] CoreData: sql: SELECT Z_VERSION, Z_UUID, Z_PLIST FROM Z_METADATA 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Beta Cell 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Gastric Chief Cell ... 2012-05-28 16:03:39.714 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.764 MedPix[34751:fb03] The createDatabase function has finished. Now fetching. 2012-05-28 16:03:39.765 MedPix[34751:fb03] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCELLDESCRIPTION, t0.ZCELLNAME FROM ZCELL t0 2012-05-28 16:03:39.766 MedPix[34751:fb03] CoreData: annotation: sql connection fetch time: 0.0008s 2012-05-28 16:03:39.767 MedPix[34751:fb03] CoreData: annotation: total fetch execution time: 0.0016s for 0 rows. 2012-05-28 16:03:39.768 MedPix[34751:fb03] cellName: <CELL: 0x6bbc120> (entity: CELL; id: 0x6bbc160 <x-coredata:///CELL/t57D10DDD-74E2-474F-97EE-E3BD0FF684DA34> ; data: { cellDescription = "S cells are cells which release secretin, found in the jejunum and duodenum. They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the s"; cellName = "S Cell"; organs = ( ); specimens = ( ); systems = ( ); tissues = ( ); }) ... Sections were cut short to abbreviate. But note that the fetch results contain information, but it says that total fetch execution was for "0" rows? How can that be? Any help will be greatly appreciated, especially detailed explanations. :) Thanks.
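
    One detail worth checking in the snippet above: -save: returns YES on success, so the error branch needs a negation -- as posted, the "Whoops, couldn't save" message actually prints when the save works. The usual idiom (a sketch):

        NSError *saveError = nil;
        // -save: returns YES on success; log only when it returns NO
        if (![[self managedObjectContext] save:&saveError]) {
            NSLog(@"Save failed: %@", [saveError localizedDescription]);
        }

    Note also that a fetch request sees unsaved objects still sitting in the managed object context, which is why the fetch can print cells while the store on disk reports 0 rows.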
