Search Results

Search found 16101 results on 645 pages for 'owsm webservices ws security ws trust soa secuirty'.


  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine in addition to the existing set of interactive accounts. The users that correspond to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen. In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of the "Users" group, and then including them in the "Deny logon locally" list in the Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it.

    The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The command list that drops out contains the "Sign out" and "Lock" commands as well as the names of other users (for "switch user" functionality). For some reason that list includes the extra users from the "Deny logon locally" list. It is interesting to note that this happens when the current user belongs to "Users", but it does not happen when the current user is from "Administrators".

    For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can log on locally), "A" (from "Users", can log on locally), and "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, in the above example trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to the welcome screen, which lists only "Administrator" and "A".

    Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through the "Deny logon locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
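    EDIT: For concreteness, here is a sketch of the setup in command form ("shareuser" is a placeholder account name; ntrights.exe is from the Windows Server 2003 Resource Kit, and I am assuming it still applies the logon-rights change on Windows 8 - the Local Security Policy GUI does the same thing). The SpecialAccounts\UserList registry value is the commonly cited way to hide an account from the welcome screen:

        net user shareuser * /add
        net localgroup Users shareuser /add
        ntrights +r SeDenyInteractiveLogonRight -u shareuser
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v shareuser /t REG_DWORD /d 0

    What I have not been able to confirm is whether the UserList entry also removes the name from the Metro "Sign out/Lock" menu, which is exactly the quirk described above.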

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit) and it's going to take me a while to find an acceptable solution. In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), and that isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / { }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second: 25609.16 [#/sec] (mean)
        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second: 22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is about 12% ((25609-22905) ÷ 22905 × 100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far, but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
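    EDIT: The approach I am testing now (a sketch, not yet benchmarked - it relies only on the stock return directive, so the third-party echo module isn't needed at all): return can set the status code and a short response body in one step, and the disk is never touched:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found\n";
        }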

    Read the article

  • I can't connect to mysql on a remote server

    - by eisaacson
    I'm trying to connect from an Ubuntu server to a RHEL6 server using MySQL. I've tried telnetting into the server as well as trying to connect with mysql. I've tried commenting out the bind-address, but didn't have any success with that either. I don't get an error code or anything with telnet; it just fails after a minute or so. With mysql, I get this error:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'SERVER_IP' (111)

    "SERVER_IP" is of course a placeholder; the actual error gives the actual IP. I've included my my.cnf as well as the iptables rules from the destination server.

    On the destination server, my.cnf:

        [mysqld]
        bind-address=0.0.0.0
        tmp_table_size=512M
        max_heap_table_size=512M
        sort_buffer_size=32M
        read_buffer_size=128K
        read_rnd_buffer_size=256K
        table_cache=2048
        key_buffer_size=512M
        thread_cache_size=50
        query_cache_type=1
        query_cache_size=256M
        query_cache_limit=24M
        #query_alloc_block_size=128
        #query_cache_min_res_unit=128
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_file_per_table
        innodb_log_files_in_group=2
        innodb_buffer_pool_size=32G
        innodb_log_file_size=512M
        innodb_additional_mem_pool_size=20M
        join_buffer_size=128K
        max_allowed_packet=100M
        max_connections=256
        wait_timeout=28800
        interactive_timeout=3600
        # modify isolation method for faster inserting.
        # Do not uncomment the line below unless you understand what this does.
        # transaction-isolation = READ-COMMITTED
        # do not reverse lookup clients
        skip-name-resolve
        #long_query_time=6
        #log_slow_queries=/var/log/mysqld-slow.log
        #log_queries_not_using_indexes=On
        #log_slow_admin_statements=On
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        #Added by Magento ECG
        long_query_time=1
        slow_query_log

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    iptables:

        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 225 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp -i eth1 --dport 11211 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    sudo netstat -ntpl:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:3306       0.0.0.0:*        LISTEN  -
        tcp   0      0      0.0.0.0:11211      0.0.0.0:*        LISTEN  -
        tcp   0      0      0.0.0.0:2123       0.0.0.0:*        LISTEN  -
        tcp   0      0      0.0.0.0:1581       0.0.0.0:*        LISTEN  -
        tcp   0      0      0.0.0.0:22         0.0.0.0:*        LISTEN  -
        tcp   0      0      127.0.0.1:25       0.0.0.0:*        LISTEN  -
        tcp   0      0      :::11211           :::*             LISTEN  -
        tcp   0      0      :::22              :::*             LISTEN  -
        tcp   0      0      :::225             :::*             LISTEN  -
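    In case it helps pin down where the block is, here is the checklist I am working through (a sketch - SERVER_IP stays a placeholder, and tcpdump assumes the package is installed):

        # from the Ubuntu client: does anything answer on 3306?
        telnet SERVER_IP 3306

        # on the RHEL6 server: does the SYN even arrive?
        tcpdump -i eth0 -nn port 3306

        # on the RHEL6 server: is any user allowed to connect from this client?
        mysql -e "SELECT user, host FROM mysql.user;"

    If the SYN never shows up in tcpdump, the block is upstream of this box (hosting provider or a hardware firewall), since the iptables rules and the 0.0.0.0:3306 listener above look correct.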

    Read the article

  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my Internal network without having to use the option "Use default gateway on remote network" in the VPN's TCP/IP IPv4 advanced settings. This is important to me, since they can then use their own internet connection while accessing my network through VPN (the security implications of this are acceptable in my scenario).

    My Internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the DHCP of my Internal network to the VPN clients, so they get IPs from the same range as the Internal network. This setup initially works: the VPN clients use their own internet and can access anything that is on the internal network. However, after a while, HTTP proxy traffic from the Internal network starts getting routed to the IP of the RRAS Dial In interface, instead of the IP of the Internal network's gateway. When this happens, the HTTP proxy starts getting denied, for obvious reasons.

    My first question is: does this happen because Forefront TMG wasn't designed to handle the scenario I described above, and it "loses itself"? My second question is: is there any way to solve this problem, either through configuration or firewall policies? My third question is: if there's no way it can work with the scenario above, is there another setup that will solve my problem and do what I'd like it to do properly?

    Below are my network routes:

        1 => Local Host Access              => Route => Local Host                       => All Networks
        2 => VPN Clients to Internal Network => Route => VPN Clients                     => Internal
        3 => Internet Access                => NAT   => Internal, Perimeter, VPN Clients => External
        4 => Internal to Perimeter          => Route => Internal, VPN Clients            => Perimeter

    Tks!

    Read the article

  • Windows 7 losing one of my displays after restart

    - by j_kubik
    I have an Intel DZ68BC motherboard with an Intel HD graphics card driving two monitors (one on DVI and one on an HDMI-to-VGA connector). My friend asked me to test whether his NVIDIA graphics card works well in my computer (in his it was causing trouble), so I inserted it into my computer, installed the NVIDIA driver, and it worked quite well. Then I removed it, uninstalled everything NVIDIA-related I could find, and switched the monitors back to my Intel card.

    Since then, after every system start/restart, the system sees only the monitor on the HDMI-to-VGA connector, completely ignoring the DVI monitor. I noticed that installing the Intel video drivers causes the system to recognize the second monitor if I don't immediately reboot; after a reboot, the system recognizes only the HDMI-to-VGA monitor. I also tried starting in safe mode and using Driver Sweeper to remove the remains of the NVIDIA drivers. While it seems that some drivers were removed, the situation didn't change. Now I am out of ideas, and I really wouldn't like to reinstall the system (again...). I also tried restoring the system to the state before this whole story, but that didn't change anything either.

    EDIT: I am still trying to troubleshoot this problem. The only starting point I could find was driver re-installation. I traced the part that restores the right settings down to this call:

        C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\x64\Drv64.exe -driverinf "C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\Graphics\igdlh64.inf" -flags 20 -keypath "Software\Intel\Difx64"

    This call fixes my displays, and as a workaround I will add it to my autorun for now. I am still looking for a better solution anyway...

    EDIT2: Using DriverView I made a list of the drivers in use both before and after fixing my display with the above command, then compared the logs. No drivers were removed by the fixing command.

    Drivers added by the fixing command:

    - MS Remote Access serial network driver (asyncmac.sys)
    - security processor (spsys.sys)

    Drivers that changed base address (indicating a driver reload?):

    - Canonical Display Driver (cdd.dll)
    - Intel Graphics Kernel Mode Driver (igdkmd64.sys)
    - Monitor Driver (monitor.sys)

    The added drivers seem rather unrelated to the problem, and the reloaded drivers are just a consequence of installing a new driver file, so there is not much to go on here... I really cannot make heads or tails of it...
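    EDIT3: Rather than a plain autorun entry, I am considering putting the call above into a small batch file and registering it as an elevated logon task (a sketch - the task name and batch path are mine):

        rem fix-display.bat contains the Drv64.exe call quoted above
        schtasks /create /tn "ReapplyIntelDisplay" /sc onlogon /rl highest /tr "C:\Tools\fix-display.bat"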

    Read the article

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal dropbox for our customers on a CentOS 6.3 machine. The server will be accessible through SFTP and a proprietary HTTP service based on PHP. This machine will be in our DMZ, so it has to be secure. Because of this I have Apache running as an unprivileged user, hardened the security on Apache, the OS and PHP, applied a lot of filtering in iptables, and applied some restrictive TCP wrappers. Now, you might have suspected this one was coming: SELinux is also set to enforcing.

    I'm setting up PAM to use MySQL so my users in the web application can log in. These users will all be in a group that can use SSH only for SFTP, and users will be chrooted to their own 'home' folder. To allow this, SELinux wants the folders to have the user_home_t tag, and the parent directory needs to be writable by root only. If these restrictions are not met, SELinux will kill the SSH pipe immediately. The files need to be accessible through both HTTP and SFTP, so I have made an SELinux module to allow Apache to search/attr/read/write etc. on directories with the user_home_dir_t tag.

    As SFTP users are stored in MySQL, I want to set up their home dirs upon user creation. This is a problem, since Apache has no write access to /home; it's writable by root only, which is required to keep SELinux and OpenSSH happy. Basically I need to let Apache do only a few tasks as root, and only within /home. So I need to somehow elevate the privileges temporarily, or let root do these tasks for Apache instead. What I need Apache to do with root privileges is the following:

        mkdir /home/userdir/
        mkdir /home/userdir/userdir
        chmod -R 0755 /home/userdir
        umask 011 /home/userdir/userdir
        chcon -R -t user_home_t /home/userdir
        chown -R user:sftp_admin /home/userdir/userdir
        chmod 2770 /home/userdir/userdir

    This would create a home for the user. Now, I have one idea that might work: cron. That would mean the server checks for users that have no home every minute, but then, when creating a user, the interface would freeze for an average of 30 seconds before the account creation could be confirmed, which I'd rather avoid. Does anybody know if something can be done with sudoers? Any other ideas are welcome... Thanks for your time!
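    EDIT: To flesh out the sudoers idea I am leaning towards (a sketch, untested - the script name and path are mine, and the username validation is the part that matters, since the script runs as root on apache-controlled input):

        # /etc/sudoers.d/apache-mkhome  (edit with: visudo -f /etc/sudoers.d/apache-mkhome)
        apache ALL=(root) NOPASSWD: /usr/local/sbin/mkhome.sh

        #!/bin/sh
        # /usr/local/sbin/mkhome.sh - create a chrooted home for one sftp user
        user="$1"
        # accept plain lowercase usernames only; anything else could escape /home
        echo "$user" | grep -Eq '^[a-z][a-z0-9_-]{0,31}$' || exit 1
        mkdir -p "/home/$user/$user"
        chmod -R 0755 "/home/$user"
        chcon -R -t user_home_t "/home/$user"
        chown -R "$user":sftp_admin "/home/$user/$user"
        chmod 2770 "/home/$user/$user"

    PHP would then run sudo /usr/local/sbin/mkhome.sh username at account-creation time, instead of waiting for a cron sweep.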

    Read the article

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups, but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive. I hate whoever came up with that idea more than I can possibly express with words. I spent several days trying to recover the disk - it is pretty well shot, and none of my best tricks could even get it to show up in the BIOS.

    The machine that broke is an HP with XP Media Center Edition on it (I don't know why either). The backups were created using the default Windows backup tool - I have a .bkf file on an external hard drive that I am trying to restore from. I've replaced the hard drive.

    My home machine is running Windows 7 64-bit, and I'm trying to use it as a platform to restore to the other disk. I downloaded the Windows 7 NT Backup restore utility; however, no matter what I do it restores to my C drive rather than the specified drive. Fortunately Win7 security settings prevented it from being a complete disaster - but still not a happy thing.

    I tried firing up the XP virtual machine. I can browse to the backups, but it says they are invalid and refuses to let me view/continue with the restore.

    I tried installing XP to an extra hard drive on my machine; however, it bluescreens on me during the install process and I cry.

    I tried installing XP Pro to the new drive and attempted to restore over it; it of course blackscreened on me, as that was a stupid idea. I made two partitions on the new hard drive (apparently the BIOS on this accursed piece of junk doesn't allow HD partitions larger than 200G anyway), and thus it fails 40 minutes into the install with an ever-descriptive "Disk Read Error". Guess how I spent last weekend?

    My last idea was to install XP Pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware". My brain made one of those bad hard drive clicky noises. I've tried several boot disks but they don't seem to work; if anyone has a link to a good one it would be greatly appreciated.

    Anyone have any more ideas? I really hate asking on what seems like such a simple issue, but I am quite literally at my wit's end. Thanks - and sorry for the really long post.

    Read the article

  • cd Command Linux and Mystery Flags

    - by Jason R. Mick
    Platform: CentOS 6.2
    Shell: tcsh

    I'm playing around with cd for a BASH script, and noticed the wondrous cd - option, but was left with many questions...

    1. Why the cd -? Isn't this redundant with cd ..? EDIT: [As FatalError points out, these two commands don't do the same things... so the answer is "no".]

    2. Can you delve farther back into your history with the - flag, a la a browser? E.g., when I type cd -, it takes me to my previous directory, but if I then enter that command again, it takes me back to the directory I just came from, creating a sort of loop. Is a shorthand for going back multiple levels supported? EDIT: I realize I can go back with cd .., but was hoping this could be a gateway to a less verbose deep back, e.g. cd -3 vs. cd ../../../... Hopefully that clarifies what I'm asking. EDIT2: As to the current feedback: while .. is a special directory, I don't see a reason why the shell built-in cd couldn't use a shorthand for ../../.../ (e.g. cd ..5), or why the built-in couldn't keep a history (a la auto pushd/popd) that could be turned on and used like cd -3. I get that this could be somewhat of a security/privacy risk, but I don't see how it's any worse than storing a command history, which most shells/terminals do.

    3. The manpage for cd, accessible via man cd and help cd (it's the same for either command), only lists the -L and -P flags. However, when I type cd --help it outputs Usage: cd [-plvn][-|<dir>].. Am I right in assuming the other flags and the - (back) option are nonstandard?

    4. What are the -n and -v flags for? Both seem to take me back to my home directory; that's all I've been able to figure out via experimentation.

    A quick read of web resources [1][2] offered just the same sort of info the man page did and didn't answer my questions. Note: the second Linux-centric resource above claimed cd only had two options (obviously not true in current CentOS), hence my assumption that this functionality could be non-standard.
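    EDIT3: For what it's worth, the "deep back" I describe in question 2 can be approximated today with a small function (a sketch for bash, since the script in question is bash; note that cd - itself is just shorthand for cd "$OLDPWD", which is why it only ever toggles between two directories):

        # up 3  is equivalent to  cd ../../..
        up() {
            local path="" i
            for ((i = 0; i < ${1:-1}; i++)); do
                path="../$path"
            done
            cd "$path"
        }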

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can connect via SSH to an instance in the public subnet, as well as to the NAT. I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem [email protected] -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10. Now, I heard iptables is the tool for this, so I went ahead and researched and played around with some routing. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions, in both subnets, and on the NAT. I'm at a loss. What am I doing wrong?
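    EDIT: Two things I am going to verify next, in case someone can confirm they matter here (a sketch; the instance ID is a placeholder): the kernel won't forward the DNAT'd packets at all unless IP forwarding is on, and EC2 drops forwarded traffic unless the Source/Dest Check attribute is disabled on the forwarding instance (the same attribute NAT instances rely on):

        # on the public instance
        sysctl -w net.ipv4.ip_forward=1

        # explicitly allow the return traffic through FORWARD as well
        iptables -A FORWARD -s 10.10.10.10/32 -p tcp -m tcp --sport 1300 -j ACCEPT

        # from a machine with the EC2 API tools (or use the console):
        ec2-modify-instance-attribute i-xxxxxxxx --source-dest-check false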

    Read the article

  • Beginner server local installation

    - by joanjgm
    Here's the thing: I own a small business, and currently my emails are managed by some regular hosting using cPanel. I bought a small server and installed Windows Server and Exchange. Can you tell me what I did wrong here?

    - Installed and configured my current existing domain
    - Configured all email addresses
    - Installed No-IP, in case my public address changes
    - In the cPanel of the domain, added an MX record pointing at the No-IP hostname of the server with priority 0

    So now emails are being received by my own server. But whenever I send an email to anyone (Gmail, Hotmail, etc.), I get a response that it cannot be delivered since it may be junk. This didn't happen when I sent emails from the hosting. What's missing? What did I do wrong? Here's the bounce:

        mx.google.com rejected your message to the following e-mail addresses:
        Joan J. Guerra Makaren ([email protected])
        mx.google.com gave this error: [186.88.202.13 12] Our system has detected that this
        message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this
        message has been blocked. Please visit
        http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for more information.
        cn9si815432vcb.71 - gsmtp

        Your message wasn't delivered due to a permission or security issue. It may have been
        rejected by a moderator, the address may only accept e-mail from certain senders, or
        another restriction may be preventing delivery.

        Diagnostic information for administrators:
        Generating server: SERVERMEGA.megaconstrucciones.com.ve
        [email protected]
        mx.google.com #550-5.7.1 [186.88.202.13 12] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. cn9si815432vcb.71 - gsmtp

        ## Original message headers:
        Received: from SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112]) by
         SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112%10]) with mapi;
         Thu, 29 May 2014 11:32:19 -0430
        From: prueba <[email protected]>
        To: "Joan J. Guerra Makaren" <[email protected]>
        Subject: Probando correos
        Thread-Topic: Probando correos
        Thread-Index: Ac97V1eW4OBFmoqJTRGoD7IPTC2azg==
        Date: Thu, 29 May 2014 16:04:35 +0000
        Message-ID: <[email protected]>
        Accept-Language: en-US, es-VE
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        Content-Type: multipart/alternative; boundary="_000_000f42494487966276f7b241megaconstruccionescomve_"
        MIME-Version: 1.0
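    From what I've read since posting, the usual first steps are a PTR (reverse DNS) record for the sending IP and an SPF record authorizing it. A sketch of the SPF TXT record, assuming mail now leaves directly from the IP shown in the bounce (186.88.202.13):

        megaconstrucciones.com.ve.  IN  TXT  "v=spf1 a mx ip4:186.88.202.13 ~all"

    The PTR record has to be set by whoever owns the IP block (usually the ISP), and if the IP is a dynamic/residential one, Gmail will often keep blocking it regardless; relaying outbound mail through a smart host is the common workaround in that case.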

    Read the article

  • Wireless Network suddenly can't connect after Windows update

    - by vinir
    UPDATE: As my patience started to run out, the laptop started to display symptoms of other malfunctions, so I ended up returning it to Asus and actually got the price of the laptop back in store credit. I did not solve the problem per se, but as I no longer have the notebook, and the screen, the keyboard, the touchpad and other parts were malfunctioning, I can safely assume it was put to rest. I don't know how to behave when my question isn't actually answered but was "solved", so I placed this over here. Anyone who knows how to close this topic, I would appreciate the heads up. Thanks for everything, everyone; it's nice to see that this topic in the community was active even after all this time had passed. - vinir

    So, I bought an ASUS K43E notebook earlier this year and set up a wireless connection to link it to. It worked great for the first weeks, but then I updated my Windows 7 Home Basic with the daily updates; after that, my home network couldn't be reached no matter what I did. I have Linux in dual boot on the same notebook, and it can connect to my home wireless network flawlessly.

    I have a hunch that it's somehow related to the network profile settings: I noticed my network was set as "Home network", but after the system updates it got changed to "Public". Now I can't connect to it to change the profile settings. My Atheros network adapter is updated to the latest driver (March 2012), and I still can't connect. The funny thing is that the same thing happened to my mother's notebook, as it has the same network adapter, Atheros AR9285, as I recall. I managed to fix it on my mother's computer by using a specific network LSP and profile reset that was available through her notebook's antivirus program, avast! Internet Security. I can't get that to work on my notebook, but I suspect that some related tool might just make it work too.

    So the question is: how do I modify a network's profile and settings that were stored on my notebook? I can't connect to the specific network in Windows, as stated before.
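    EDIT: One thing worth trying on both machines before anything more drastic (a sketch - "HomeNetwork" is a placeholder for the actual SSID): Windows keeps per-network wireless profiles that survive driver reinstalls, so deleting the stored profile and re-associating from scratch can clear a corrupted one:

        netsh wlan show profiles
        netsh wlan delete profile name="HomeNetwork"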

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we have a need for a rock-solid, reliable file-mover process. We have source directories that are often being written to that we need to move files from. The files come in pairs - a big binary and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex?

    Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers.

    We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs or poorly-handled error cases. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we are concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
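    If we do end up scripting rsync, this is roughly the shape I have in mind (a sketch only - host names and paths are placeholders, and it assumes the SFTP sources also permit rsync over SSH, which not all of them may; lftp mirror would be the fallback for pure-SFTP hosts):

        #!/bin/sh
        set -e
        SRC="user@source-host:/outgoing/"
        STAGE="/data/stage/"
        DEST="aixuser@aix-host:/ingest/"

        # 1. pull the big binaries first, never the XML indexes;
        #    --partial keeps restartable temp files across network hiccups
        rsync -av --partial --exclude='*.xml' "$SRC" "$STAGE"
        # 2. pull the XML indexes only once their binaries are complete
        rsync -av --partial "$SRC" "$STAGE"
        # 3. push to the destination the same way: binaries first, XML last,
        #    so the ingester never sees an index without its binary
        rsync -av --exclude='*.xml' "$STAGE" "$DEST"
        rsync -av "$STAGE" "$DEST"

    On the still-being-written worry: rsync writes each transfer to a temporary dot-file and renames it into place on the destination, so readers there never see a half-copied file; on the source side, though, it will happily pick up a file that is still growing, so the XML-last convention (or a check of the CTL file before step 1) still has to carry that guarantee.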

    Read the article

  • Apache will not stop/start gracefully

    - by ddjammin
    CentOs 6 64bit running apache 2.2.15-29.el6.centos. When I try to stop/start or restart httpd I get an error that says it has failed. A tail of the error log is below. I also noticed that a httpd.pid file is not created even though it is configured in the main conf file. If I set selinux to permissive, it works just fine. I do not want to run it with selinux disabled. If I delete the SSL_Mutex file it will start. HTTPD was running fine until I tried to add the ssl configuration. I copied over the ssl.conf file from a working server into the conf.d folder. I also copied a sslcert folder into the conf folder. It contains the certs, key, csr and password file. I think the problem has to do with the selinux context for the sslcert folder that was copied but I am not certain and not sure how to fix it. Below is the security context for the sslcert folder after executing restorecon -R sslcert ls -Z -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 httpd.conf -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 magic **drwxr-xr-x. root root system_u:object_r:httpd_config_t:s0 sslcert** tail -f /var/log/httpd/error_log [Thu Oct 17 13:33:19 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Oct 17 13:33:20 2013] [notice] Digest: generating secret for digest authentication ... [Thu Oct 17 13:33:20 2013] [notice] Digest: done [Thu Oct 17 13:33:20 2013] [warn] pid file /etc/httpd/logs/ssl.pid overwritten -- Unclean shutdown of previous Apache run? [Thu Oct 17 13:33:20 2013] [notice] Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations [Thu Oct 17 21:04:48 2013] [notice] caught SIGTERM, shutting down [Thu Oct 17 21:06:42 2013] [notice] **SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0** [Thu Oct 17 21:06:42 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Oct 17 21:06:42 2013] [error] (17)File exists: Cannot create SSLMutex with file `/etc/httpd/logs/ssl_mutex' I also saw mention of possible issues with semaphores. Below is the output of the current semaphores and apache is currently not running. ipcs -s ------ Semaphore Arrays -------- key semid owner perms nsems 0x00000000 0 root 600 1 0x00000000 65537 root 600 1 Finally selinux reports the following error. `sealert -a /var/log/audit/audit.log` 0% donetype=AVC msg=audit(1382034755.118:420400): avc: denied { write } for pid=3393 comm="httpd" name="ssl_mutex" dev=dm-0 ino=9513484 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:httpd_log_t:s0 tclass=file **** Invalid AVC allowed in current policy *** 100% doneERROR: failed to read complete file, 1044649 bytes read out of total 1043317 bytes (/var/log/audit/audit.log) found 1 alerts in /var/log/audit/audit.log -------------------------------------------------------------------------------- SELinux is preventing /usr/sbin/httpd from remove_name access on the directory ssl_mutex.

    Read the article

  • Need a script/batch/program that runs a command that won't be killed when the parent is killed

    - by billc.cn
    The scenario: I use Zabbix to monitor my servers, and recently I wanted to add some more metrics for the Windows ones. For security reasons I used Zabbix's User Parameter feature, but it limits the execution of external commands to about 3 seconds; after that, the command is forcibly killed. I want to run some long-running commands, so I used the trick from Zabbix's forum: run the command in the background, write the results to a file, and use Zabbix to collect them. This is rather easy under *nix thanks to the "&" operator, but there is no such support in Windows' shell. To make things worse, when Zabbix forcibly kills the cmd.exe it used to evaluate the commands, all child processes die, including the unfinished background tasks. Thus I need something that can sever all ties with its children, so they won't be affected by the cascading kill.

    What I've tried:

    - start and start /B - they do nothing, as the child always dies with the parent
    - WScript.Shell.Run as in invis.vbs from Stack Overflow - sometimes works. If the wscript process is forcibly killed, as opposed to quitting on its own, the children die as well.
    - hstart - similar results to invis.vbs
    - the at command - this requires you to set an absolute time for the task to run, as opposed to an offset, so the code would be quite messy given the limited shell scripting capability of Windows
    - (Edit) PsExec.exe from the Sysinternals suite - it uses a service to launch the command, so it is not affected by the kill; however, it prints some banner and log info to StdErr, and there's no switch to disable this. When I use 2>NUL to redirect it, Zabbix reports an error.

    After trying the above in different combinations, I noticed that if I call hstart from invis.vbs, the command started by the former is left alone as a parent-less process when invis.vbs is killed. However, since I need to redirect the output, the command I want to run is always of the form cmd.exe /c ""command" "args"" >log. The vbs also removes all the quotes, so I have to encode the command with self-defined escape sequences. The end result involves about five levels of escaping/quoting, which is almost impossible to maintain. Does anyone know any better solutions?

    Some requirements:

    - Any bat/vbs/js/Win32 binary is acceptable
    - Better not to require multiple levels of escaping
    - No .NET (including PowerShell), because it is not installed
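    EDIT: One more avenue I am experimenting with (a sketch - the task name, command and log path are placeholders): create a scheduled task once, then have the UserParameter merely start it. The task runs as a child of the Task Scheduler service, not of Zabbix's cmd.exe, so the cascading kill never reaches it:

        rem one-time setup
        schtasks /create /tn "ZbxLongCheck" /sc once /st 00:00 /rl highest /tr "cmd /c C:\zabbix\scripts\longcheck.bat > C:\zabbix\data\longcheck.txt 2>&1"

        rem what the UserParameter actually runs (returns immediately)
        schtasks /run /tn "ZbxLongCheck"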

    Read the article

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background

    I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long.

    The test

    Good little scientist that I am, I tried to reduce the problem down to one variable. As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download."

    My question

    I'd still prefer to use the router for a few reasons:

    - it's a pain to connect and disconnect everything every time there's a big file to download
    - direct connection to the modem isn't good for security
    - only one machine can be connected directly to the modem at a time

    What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables.

    EDIT: After asking this followup question and then this one, I installed dd-wrt on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer - which is why I'm not posting it as an answer - but it is how I resolved the situation, and hopefully it'll be helpful for someone.

    Read the article

  • Loading any MVC page fails with the error "An item with the same key has already been added."

    - by MajorRefactoring
    I am having an intermittent issue that appears on one server only and causes all MVC pages to fail to load with the error "An item with the same key has already been added." Restarting the application pool fixes the issue, but until then, loading any MVC page throws the following exception:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 10/11/2012 08:09:24
        Event time (UTC): 10/11/2012 08:09:24
        Event ID: d76264aedc4241d4bce9247692510466
        Event sequence: 6407
        Event occurrence: 30
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/21/ROOT-2-129969647741292058
            Trust level: Full
            Application Virtual Path: /
            Application Path: d:\websites\SiteAndAppPoolName\
            Machine name: UKSERVER

        Process information:
            Process ID: 6156
            Process name: w3wp.exe
            Account name: IIS APPPOOL\SiteAndAppPoolName

        Exception information:
            Exception type: ArgumentException
            Exception message: An item with the same key has already been added.

        Server stack trace:
            at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
            at System.Linq.Enumerable.ToDictionary[TSource,TKey,TElement](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer)
            at System.Web.WebPages.Scope.WebConfigScopeDictionary.<>c__DisplayClass4.<.ctor>b__0()
            at System.Lazy`1.CreateValue()

        Exception rethrown at [0]:
            at System.Lazy`1.get_Value()
            at System.Web.WebPages.Scope.WebConfigScopeDictionary.TryGetValue(Object key, Object& value)
            at System.Web.Mvc.ViewContext.ScopeGet[TValue](IDictionary`2 scope, String name, TValue defaultValue)
            at System.Web.Mvc.ViewContext.ScopeCache.Get(IDictionary`2 scope, HttpContextBase httpContext)
            at System.Web.Mvc.ViewContext.GetClientValidationEnabled(IDictionary`2 scope, HttpContextBase httpContext)
            at System.Web.Mvc.Html.FormExtensions.FormHelper(HtmlHelper htmlHelper, String formAction, FormMethod method, IDictionary`2 htmlAttributes)
            at System.Web.Mvc.Html.FormExtensions.BeginForm(HtmlHelper htmlHelper, String actionName, String controllerName)
            at ASP._Page_Views_Dashboard_Functions_BookingQuickLookup_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Functions\BookingQuickLookup.cshtml:line 3
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.Html.PartialExtensions.Partial(HtmlHelper htmlHelper, String partialViewName, Object model, ViewDataDictionary viewData)
            at ASP._Page_Views_Dashboard_Functions_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Functions.cshtml:line 5
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.Html.RenderPartialExtensions.RenderPartial(HtmlHelper htmlHelper, String partialViewName, Object model)
            at ASP._Page_Views_Dashboard_Index_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Index.cshtml:line 9
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context)
            at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass1c.<InvokeActionResultWithFilters>b__19()
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult)
            at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
            at System.Web.Mvc.Controller.ExecuteCore()
            at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext)
            at System.Web.Mvc.MvcHandler.<>c__DisplayClass6.<>c__DisplayClassb.<BeginProcessRequest>b__5()
            at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0()
            at System.Web.Mvc.MvcHandler.<>c__DisplayClasse.<EndProcessRequest>b__d()
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

        Request information:
            Request URL: http://SiteAndAppPoolName.spawtz.com/Dashboard
            Request path: /Dashboard
            User host address: 86.164.135.41
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: IIS APPPOOL\SiteAndAppPoolName

        Thread information:
            Thread ID: 17
            Thread account name: IIS APPPOOL\SiteAndAppPoolName
            Is impersonating: False
            Stack trace:
            at System.Lazy`1.get_Value()
            at System.Web.WebPages.Scope.WebConfigScopeDictionary.TryGetValue(Object key, Object& value)
            at System.Web.Mvc.ViewContext.ScopeGet[TValue](IDictionary`2 scope, String name, TValue defaultValue)
            at System.Web.Mvc.ViewContext.ScopeCache.Get(IDictionary`2 scope, HttpContextBase httpContext)
            at System.Web.Mvc.ViewContext.GetClientValidationEnabled(IDictionary`2 scope, HttpContextBase httpContext)
            at System.Web.Mvc.Html.FormExtensions.FormHelper(HtmlHelper htmlHelper, String formAction, FormMethod method, IDictionary`2 htmlAttributes)
            at System.Web.Mvc.Html.FormExtensions.BeginForm(HtmlHelper htmlHelper, String actionName, String controllerName)
            at ASP._Page_Views_Dashboard_Functions_BookingQuickLookup_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Functions\BookingQuickLookup.cshtml:line 3
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.Html.PartialExtensions.Partial(HtmlHelper htmlHelper, String partialViewName, Object model, ViewDataDictionary viewData)
            at ASP._Page_Views_Dashboard_Functions_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Functions.cshtml:line 5
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.Html.RenderPartialExtensions.RenderPartial(HtmlHelper htmlHelper, String partialViewName, Object model)
            at ASP._Page_Views_Dashboard_Index_cshtml.Execute() in d:\Websites\SiteAndAppPoolName\Views\Dashboard\Index.cshtml:line 9
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
            at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
            at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
            at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context)
            at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass1c.<InvokeActionResultWithFilters>b__19()
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult)
            at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
            at System.Web.Mvc.Controller.ExecuteCore()
            at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext)
            at System.Web.Mvc.MvcHandler.<>c__DisplayClass6.<>c__DisplayClassb.<BeginProcessRequest>b__5()
            at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0()
            at System.Web.Mvc.MvcHandler.<>c__DisplayClasse.<EndProcessRequest>b__d()
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

        Custom event details:

    As mentioned, every MVC action throws this error until the app pool is restarted, and the error seems to originate in System.Web.WebPages.Scope.WebConfigScopeDictionary.TryGetValue(Object key, Object& value). Has anyone seen this issue before? It happens only on this server, on any of the app pools on the server (not confined to this one), and an app pool restart sorts it. Any help much appreciated. Cheers, Matthew
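    EDIT: One avenue I am checking, for anyone who lands here: the dictionary being populated in the WebConfigScopeDictionary constructor (see the ToDictionary frame at the top of the trace) is built from the application's <appSettings>, so a commonly suggested check - an assumption on my part, I have not yet confirmed it explains the intermittency - is to make sure no key ends up defined twice across machine.config, the root web.config and the site's web.config, e.g. by clearing inherited entries before re-adding what the site needs:

        <appSettings>
          <clear />
          <add key="webpages:Version" value="1.0.0.0" />
          <!-- re-add whatever the application actually uses -->
        </appSettings>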

    Read the article

  • Windows Vista/Win7 Privilege Problem: SeDebugPrivilege & OpenProcess

    - by KevenK
    Everything I've been able to find about escalating to the appropriate privileges for my needs has agreed with my current methods, but the problem persists. I'm hoping maybe someone has some Windows Vista/Win7 internals experience that might shine some light where there is only darkness. I'm sure this will get long, but please bear with me.

    Context: I'm working on an app that requires accessing the memory of other processes on the current machine. This, obviously, requires administrator rights. It also requires SeDebugPrivilege, which I believe myself to be acquiring correctly, although I question whether more privileges are necessary and thus the cause of my problems. The code has so far worked successfully on all versions of Windows XP, and in my test Vista 32 and Win7 x64 environments.

    Process (the program will always be run with administrator rights; this can be assumed throughout this post):

    1. Escalate the current process's access token to include SeDebugPrivilege.
    2. Use EnumProcesses to create a list of current PIDs on the system.
    3. Open a handle using OpenProcess with PROCESS_ALL_ACCESS access rights.
    4. Use ReadProcessMemory to read the memory of the other process.

    Problem: Everything has been working fine during development and my personal testing (including Windows XP 32 & 64, Windows Vista 32, and Windows 7 x64). However, during a test deployment onto both Windows Vista (32-bit) and Windows 7 (64-bit) machines of a colleague, there seems to be a privilege/rights problem, with OpenProcess failing with a generic Access Denied error. This occurs both when running as a limited user (as would be expected) and also when run explicitly as administrator (right-click, Run as Administrator, and when run from an administrator-level command prompt). However, this problem has been unreproducible in my test environment. I have witnessed the problem first hand, so I trust that it exists. The only difference I can discern between the actual environment and my test environment is that the actual error occurs when using a Domain Administrator account at the UAC prompt, whereas my tests (which work with no errors) use a local administrator account at the UAC prompt. It appears that although the credentials being used allow UAC to 'run as administrator', the process is still not obtaining the correct rights to be able to call OpenProcess on another process. I am not familiar enough with the internals of Vista/Win7 to know what this might be, and I am hoping someone has an idea of what could be the cause.

    The kicker: The person who reported this error, and whose environment can regularly reproduce the bug, has a small application named along the lines of RunWithDebugEnabled, a small bootstrap program which appears to escalate its own privileges and then launch the executable passed to it (thus inheriting the escalated privileges). When run with this program, using the same Domain Administrator credentials at the UAC prompt, the program works correctly and is able to successfully call OpenProcess, operating as intended. So this is definitely a problem with acquiring the correct privileges, and it is known that the Domain Administrator account is an administrator account that should be able to obtain the correct rights. (Obviously obtaining that source code would be great, but I wouldn't be here if that were possible.)

    Notes: As noted, the errors reported by the failed OpenProcess attempts are Access Denied. According to the MSDN documentation for OpenProcess: "If the caller has enabled the SeDebugPrivilege privilege, the requested access is granted regardless of the contents of the security descriptor." This leads me to believe that, under these conditions, there is a problem either with (1) obtaining SeDebugPrivilege or (2) requiring other privileges which have not been mentioned in any MSDN documentation, and which might differ between a Domain Administrator account and a local administrator account.

    Sample code:

        void sample()
        {
            /////////////////////////////////////////////////////////
            // Note: Enabling SeDebugPrivilege adapted from sample
            // MSDN @ http://msdn.microsoft.com/en-us/library/aa446619%28VS.85%29.aspx

            // Enable SeDebugPrivilege
            HANDLE hToken = NULL;
            TOKEN_PRIVILEGES tokenPriv;
            LUID luidDebug;
            if (OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken) != FALSE)
            {
                if (LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luidDebug) != FALSE)
                {
                    tokenPriv.PrivilegeCount = 1;
                    tokenPriv.Privileges[0].Luid = luidDebug;
                    tokenPriv.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
                    if (AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, 0, NULL, NULL) != FALSE)
                    {
                        // Always successful, even in the cases which lead to OpenProcess failure
                        cout << "SUCCESSFULLY CHANGED TOKEN PRIVILEGES" << endl;
                    }
                    else
                    {
                        cout << "FAILED TO CHANGE TOKEN PRIVILEGES, CODE: " << GetLastError() << endl;
                    }
                }
            }
            CloseHandle(hToken);
            // Enable SeDebugPrivilege
            /////////////////////////////////////////////////////////

            vector<DWORD> pidList = getPIDs(); // Method that simply enumerates all current process IDs

            /////////////////////////////////////////////////////////
            // Attempt to open processes
            for (int i = 0; i < pidList.size(); ++i)
            {
                HANDLE hProcess = NULL;
                hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pidList[i]);
                if (hProcess == NULL)
                {
                    // Error is occurring here under the given conditions
                    cout << "Error opening process PID(" << pidList[i] << "): " << GetLastError() << endl;
                }
                CloseHandle(hProcess);
            }
            // Attempt to open processes
            /////////////////////////////////////////////////////////
        }

    Thanks! If anyone has some insight into what possible permissions/privileges/rights/etc. I may be missing to correctly open another process (assuming the executable has been properly "Run as Administrator"ed) on Windows Vista and Windows 7 under the above conditions, it would be most greatly appreciated. I wouldn't be here if I weren't absolutely stumped, but I'm hopeful that once again the experience and knowledge of the group shines bright. I thank you for taking the time to read this wall of text. The good intentions alone are appreciated; thanks for being the type of person that makes SO so useful to all!
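    EDIT: While writing this up I noticed a possible hole in my own privilege check, which may be worth ruling out first: AdjustTokenPrivileges returns TRUE even when it enabled nothing, and the only indication is GetLastError() returning ERROR_NOT_ALL_ASSIGNED. That outcome would be consistent with a domain policy stripping SeDebugPrivilege from the Domain Administrator's token. A sketch of the stricter test:

        if (AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, 0, NULL, NULL) != FALSE)
        {
            // Success only means the call itself was well-formed; the privilege may
            // still be absent from the token (e.g. removed by Group Policy).
            if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
            {
                cout << "SeDebugPrivilege is NOT held; OpenProcess will be limited" << endl;
            }
        }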

    Read the article

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?)
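One fix worth making regardless of the larger design questions: the persistence snippet above concatenates values straight into the SQL string, which breaks on quotes and is open to SQL injection. A minimal parameterized sketch (assuming the same hypothetical VacationProperties table, GlobalConnection singleton, and pn/pt/pr arrays as above, plus using System.Data and System.Data.SqlClient):

    // Sketch: parameterized inserts instead of string concatenation.
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO VacationProperties (PropertyName, PropertyType, IsRequired) " +
        "VALUES (@name, @type, @required)", GlobalConnection.Instance))
    {
        cmd.Parameters.Add("@name", SqlDbType.VarChar, 100);
        cmd.Parameters.Add("@type", SqlDbType.VarChar, 100);
        cmd.Parameters.Add("@required", SqlDbType.VarChar, 100);

        for (int i = 0; i < pn.Length; i++)
        {
            cmd.Parameters["@name"].Value = pn[i];
            cmd.Parameters["@type"].Value = pt[i];
            cmd.Parameters["@required"].Value = pr[i];
            cmd.ExecuteNonQuery(); // one row per property; no reader needed
        }
    }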

    Read the article

  • NuSOAP PHP Request From XML

    - by Tegan Snyder
I'm trying to take the following XML request and convert it to a NuSOAP request, and I'm having a bit of difficulty. Could anybody chime in? XML Request: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dto="http://dto.eai.dnbi.com" xmlns:com="http://com.dnbi.eai.service.AccountSEI"> <soapenv:Header> <dto:AuthenticationDTO> <dto:LOGIN_ID>[email protected]</dto:LOGIN_ID> <dto:LOGIN_PASSWORD>mypassword</dto:LOGIN_PASSWORD> </dto:AuthenticationDTO> </soapenv:Header> <soapenv:Body> <com:matchCompany> <com:in0> <!--Optional:--> <dto:bureauName></dto:bureauName> <!--Optional:--> <dto:businessInformation> <dto:address> <!--Optional:--> <dto:city>Bloomington</dto:city> <dto:country>US</dto:country> <dto:state>MN</dto:state> <dto:street>555 Plain ST</dto:street> <!--Optional:--> <dto:zipCode></dto:zipCode> </dto:address> <!--Optional:--> <dto:businessName>Some Company</dto:businessName> </dto:businessInformation> <!--Optional:--> <dto:entityNumber></dto:entityNumber> <!--Optional:--> <dto:entityType></dto:entityType> <!--Optional:--> <dto:listOfSimilars>true</dto:listOfSimilars> </com:in0> </com:matchCompany> </soapenv:Body> </soapenv:Envelope> And my PHP code: <?php require_once('nusoap.php'); $params = array( 'LOGIN_ID' => '[email protected]', 'LOGIN_PASSWORD' => 'mypassword', 'bureauName' => '', 'businessInformation' => array( 'address' => array( 'city' => 'Bloomington', 'country' => 'US', 'state' => 'MN', 'street' => '555 Plain ST', 'zipCode' => '' ), 'businessName' => 'Some Company' ), 'entityNumber' => '', 'entityType' => '', 'listOfSimilars' => 'true', ); $wsdl="http://www.domain.com/ws/AccountManagement.wsdl"; $client = new nusoap_client($wsdl, true); $err = $client->getError(); if ($err) { echo '<h2>Constructor error</h2><pre>' . $err . '</pre>'; echo '<h2>Debug</h2><pre>' . htmlspecialchars($client->getDebug(), ENT_QUOTES) . '</pre>'; exit(); } $result = $client->call('matchCompany', $params); if ($client->fault) { echo '<h2>Fault (Expect - The request contains an invalid SOAP body)</h2><pre>'; print_r($result); echo '</pre>'; } else { $err = $client->getError(); if ($err) { echo '<h2>Error</h2><pre>' . $err . '</pre>'; } else { echo '<h2>Result</h2><pre>'; print_r($result); echo '</pre>'; } } echo '<h2>Request</h2><pre>' . htmlspecialchars($client->request, ENT_QUOTES) . '</pre>'; echo '<h2>Response</h2><pre>' . htmlspecialchars($client->response, ENT_QUOTES) . '</pre>'; echo '<h2>Debug</h2><pre>' . htmlspecialchars($client->getDebug(), ENT_QUOTES) . '</pre>'; ?> Am I generating the Header information correctly? I think that may be where I'm off. Thanks,
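A hedged sketch of what is probably missing: the PHP above never builds the <dto:AuthenticationDTO> SOAP header, so the credentials ride along as body parameters instead. NuSOAP lets you attach raw header XML with setHeaders(), and since the XML body wraps the arguments in <com:in0>, the parameters likely need the same wrapping (verify both points against the WSDL):

    <?php
    // Sketch only: attach the AuthenticationDTO as a SOAP header rather than
    // passing LOGIN_ID/LOGIN_PASSWORD inside $params.
    $headers = '<dto:AuthenticationDTO xmlns:dto="http://dto.eai.dnbi.com">
                  <dto:LOGIN_ID>[email protected]</dto:LOGIN_ID>
                  <dto:LOGIN_PASSWORD>mypassword</dto:LOGIN_PASSWORD>
                </dto:AuthenticationDTO>';
    $client->setHeaders($headers);

    // Wrap the body parameters in in0 to mirror <com:matchCompany><com:in0>...
    $result = $client->call('matchCompany', array('in0' => $params));
    ?>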

    Read the article

  • implement N-Tier Entity Framework 4.0 with DTOs

    - by kathy
Hi, I'm currently building a web-based system and trying to implement N-Tier Entity Framework 4.0 with DTOs in a SOA architecture. I am having a problem understanding how I should implement the Data Access Layer (DAL), the Business Logic Layer (BLL), and the Presentation Layer. Let’s suppose that I have a “useraccount” entity that has the following: Id, FirstName, LastName, AuditFields_InsertDate, AuditFields_UpdateDate. In the DAL I created a class “UserAccountsData.cs” as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace OrderSystemDAL { public static class UserAccountsData { public static int Insert(string firstName, string lastName, DateTime insertDate) { using (OrderSystemEntities db = new OrderSystemEntities()) { return Insert(db, firstName, lastName, insertDate); } } public static int Insert(OrderSystemEntities db, string firstName, string lastName, DateTime insertDate) { return db.UserAccounts_Insert(firstName, lastName, insertDate, insertDate).ElementAt(0).Value; } public static void Update(int id, string firstName, string lastName, DateTime updateDate) { using (OrderSystemEntities db = new OrderSystemEntities()) { Update(db, id, firstName, lastName, updateDate); } } public static void Update(OrderSystemEntities db, int id, string firstName, string lastName, DateTime updateDate) { db.UserAccounts_Update(id, firstName, lastName, updateDate); } public static void Delete(int id) { using (OrderSystemEntities db = new OrderSystemEntities()) { Delete(db, id); } } public static void Delete(OrderSystemEntities db, int id) { db.UserAccounts_Delete(id); } public static UserAccount SelectById(int id) { using (OrderSystemEntities db = new OrderSystemEntities()) { return SelectById(db, id); } } public static UserAccount SelectById(OrderSystemEntities db, int id) { return db.UserAccounts_SelectById(id).ElementAtOrDefault(0); } public static List<UserAccount> SelectAll() { using (OrderSystemEntities db = new OrderSystemEntities()) { return SelectAll(db); } } public static List<UserAccount> SelectAll(OrderSystemEntities db) { return db.UserAccounts_SelectAll().ToList(); } } } And in the BLL I created a class “UserAccountEO.cs” as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Collections; using OrderSystemDAL; namespace OrderSystemBLL { public class UserAccountEO { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public DateTime InsertDate { get; set; } public DateTime UpdateDate { get; set; } public string FullName { get { return LastName + ", " + FirstName; } } public bool Save(ref ArrayList validationErrors) { ValidateSave(ref validationErrors); if (validationErrors.Count == 0) { if (Id == 0) { Id = UserAccountsData.Insert(FirstName, LastName, DateTime.Now); } else { UserAccountsData.Update(Id, FirstName, LastName, DateTime.Now); } return true; } else { return false; } } private void ValidateSave(ref ArrayList validationErrors) { if (FirstName.Trim() == "") { validationErrors.Add("The First Name is required."); } if (LastName.Trim() == "") { validationErrors.Add("The Last Name is required."); } } public void Delete(ref ArrayList validationErrors) { ValidateDelete(ref validationErrors); if (validationErrors.Count == 0) { UserAccountsData.Delete(Id); } } private void ValidateDelete(ref ArrayList validationErrors) { //Check for referential integrity. 
} public bool Select(int id) { UserAccount userAccount = UserAccountsData.SelectById(id); if (userAccount != null) { MapData(userAccount); return true; } else { return false; } } internal void MapData(UserAccount userAccount) { Id = userAccount.Id; FirstName = userAccount.FirstName; LastName = userAccount.LastName; InsertDate = userAccount.AuditFields_InsertDate; UpdateDate = userAccount.AuditFields_UpdateDate; } public static List<UserAccountEO> SelectAll() { List<UserAccountEO> userAccounts = new List<UserAccountEO>(); List<UserAccount> userAccountDTOs = UserAccountsData.SelectAll(); foreach (UserAccount userAccountDTO in userAccountDTOs) { UserAccountEO userAccountEO = new UserAccountEO(); userAccountEO.MapData(userAccountDTO); userAccounts.Add(userAccountEO); } return userAccounts; } } } And in the PL I created a webpage as follows: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using OrderSystemBLL; using System.Collections; namespace OrderSystemUI { public partial class Users : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { LoadUserDropDownList(); } } private void LoadUserDropDownList() { ddlUsers.DataSource = UserAccountEO.SelectAll(); ddlUsers.DataTextField = "FullName"; ddlUsers.DataValueField = "Id"; ddlUsers.DataBind(); } } } Is this the right way to implement the DTO pattern in an n-tier architecture using EF4? I would appreciate your help. Thanks.
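One refinement to consider before building more entities this way: the EO classes call the static UserAccountsData methods directly, which makes the BLL impossible to unit-test without a database. A sketch of an interface-based alternative (names here are illustrative, not part of EF4 itself):

    using System;
    using System.Collections.Generic;

    // Put the data access behind an interface so the BLL depends on an
    // abstraction rather than on the static DAL class.
    public interface IUserAccountRepository
    {
        UserAccount SelectById(int id);
        List<UserAccount> SelectAll();
        int Insert(string firstName, string lastName, DateTime insertDate);
        void Update(int id, string firstName, string lastName, DateTime updateDate);
        void Delete(int id);
    }

    public class UserAccountEO
    {
        private readonly IUserAccountRepository _repo;

        // The repository is injected, so tests can substitute a fake.
        public UserAccountEO(IUserAccountRepository repo)
        {
            _repo = repo;
        }

        // Save/Select/Delete would call _repo.Insert(...), _repo.SelectById(...)
        // and so on instead of UserAccountsData directly.
    }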

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
I may seem to be a bit wordy at first, but I hope it will make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project. I have a file of all the woods in the second-largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major, desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates. At one point, with the help of others from another forum, I may very well have finally gotten the proper SQL statement. When I ran it, though, the system said (semi-amusingly, at first) that MySQL had gone away! After looking up on the Net what could have done this, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well. The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the 'species_name' column in both. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates in the process. 
Be careful about assuming that a SELECT DISTINCT statement will do the job, because absolutely NO records in the species table may be destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this, but trust me: those have to be removed the long, hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing the new names in tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP-plus-SQL method? Or would I have to break up the Tervuren files into a few smaller ones and run them independently (I hope not)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it seems on the surface. By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)! This is my first time in this forum, so I do not know how I will receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
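For the append itself, a set-based INSERT ... SELECT that skips names already present sidesteps the duplicate problem without touching a single existing row. A sketch, assuming both tables expose a species_name column (adjust the table and column names to the real schema):

    -- An index makes the existence check cheap (assumed column name).
    ALTER TABLE species ADD INDEX idx_species_name (species_name);

    -- Copy only Tervuren names not already in species; existing rows are never modified.
    -- DISTINCT guards against any duplicates inside the incoming file itself.
    INSERT INTO species (species_name)
    SELECT DISTINCT t.species_name
    FROM tervuren_target AS t
    LEFT JOIN species AS s ON s.species_name = t.species_name
    WHERE s.species_name IS NULL
    LIMIT 5000;  -- batch to stay under the shared host's timeout;
                 -- rerun until it reports 0 rows affected.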

    Read the article

  • No method error in controller create action

    - by user2799827
    I have read a number of Q&As on SO in search of some help on this but have so far not solved my issue. I am trying to teach myself ruby/rails, and as a test project, I want to create a list of tvshows and a list of characters where each tvshow has_many characters and each character belongs_to a specific show. I am sure I am doing something basic incorrectly. Any assistance would be greatly appreciated. here is the characters controller: class CharactersController < ApplicationController before_action :set_character, only: [:show, :edit, :update, :destroy] # GET /characters # GET /characters.json def index @characters = Character.all end # GET /characters/1 # GET /characters/1.json def show end # GET /characters/new def new @character = Character.new end # GET /characters/1/edit def edit end # POST /characters # POST /characters.json def create @character = @tvshow.characters.create(params[:character]) respond_to do |format| if @character.save format.html { redirect_to @character, notice: 'Character was successfully created.' } format.json { render action: 'show', status: :created, location: @character } else format.html { render action: 'new' } format.json { render json: @character.errors, status: :unprocessable_entity } end end end # PATCH/PUT /characters/1 # PATCH/PUT /characters/1.json def update respond_to do |format| if @character.update(character_params) format.html { redirect_to @character, notice: 'Character was successfully updated.' } format.json { head :no_content } else format.html { render action: 'edit' } format.json { render json: @character.errors, status: :unprocessable_entity } end end end # DELETE /characters/1 # DELETE /characters/1.json def destroy @character.destroy respond_to do |format| format.html { redirect_to characters_url } format.json { head :no_content } end end private # Use callbacks to share common setup or constraints between actions. def set_character @character = Character.find(params[:id]) end # Never trust parameters from the scary internet, only allow the white list through. def character_params params.require(:character).permit(:first_name, :last_name, :bio) end end character model: class Character < ActiveRecord::Base belongs_to :tvshow default_scope -> { order('created_at DESC') } validates :tvshow_id, presence: true end tvshow model: class Tvshow < ActiveRecord::Base has_many :characters, dependent: :destroy end error gets returned when I attempt to create a character. 
here is the full trace: app/controllers/characters_controller.rb:27:in `create' actionpack (4.0.0) lib/action_controller/metal/implicit_render.rb:4:in `send_action' actionpack (4.0.0) lib/abstract_controller/base.rb:189:in `process_action' actionpack (4.0.0) lib/action_controller/metal/rendering.rb:10:in `process_action' actionpack (4.0.0) lib/abstract_controller/callbacks.rb:18:in `block in process_action' activesupport (4.0.0) lib/active_support/callbacks.rb:413:in `_run__1211653665462320621__process_action__callbacks' activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.0) lib/abstract_controller/callbacks.rb:17:in `process_action' actionpack (4.0.0) lib/action_controller/metal/rescue.rb:29:in `process_action' actionpack (4.0.0) lib/action_controller/metal/instrumentation.rb:31:in `block in process_action' activesupport (4.0.0) lib/active_support/notifications.rb:159:in `block in instrument' activesupport (4.0.0) lib/active_support/notifications/instrumenter.rb:20:in `instrument' activesupport (4.0.0) lib/active_support/notifications.rb:159:in `instrument' actionpack (4.0.0) lib/action_controller/metal/instrumentation.rb:30:in `process_action' actionpack (4.0.0) lib/action_controller/metal/params_wrapper.rb:245:in `process_action' activerecord (4.0.0) lib/active_record/railties/controller_runtime.rb:18:in `process_action' actionpack (4.0.0) lib/abstract_controller/base.rb:136:in `process' actionpack (4.0.0) lib/abstract_controller/rendering.rb:44:in `process' actionpack (4.0.0) lib/action_controller/metal.rb:195:in `dispatch' actionpack (4.0.0) lib/action_controller/metal/rack_delegation.rb:13:in `dispatch' actionpack (4.0.0) lib/action_controller/metal.rb:231:in `block in action' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:80:in `call' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:80:in `dispatch' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:48:in `call' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:71:in `block in call' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:59:in `each' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:59:in `call' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:655:in `call' rack (1.5.2) lib/rack/etag.rb:23:in `call' rack (1.5.2) lib/rack/conditionalget.rb:35:in `call' rack (1.5.2) lib/rack/head.rb:11:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/params_parser.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/flash.rb:241:in `call' rack (1.5.2) lib/rack/session/abstract/id.rb:225:in `context' rack (1.5.2) lib/rack/session/abstract/id.rb:220:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/cookies.rb:486:in `call' activerecord (4.0.0) lib/active_record/query_cache.rb:36:in `call' activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:626:in `call' activerecord (4.0.0) lib/active_record/migration.rb:369:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call' activesupport (4.0.0) lib/active_support/callbacks.rb:373:in `_run__2792846465963916895__call__callbacks' activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/reloader.rb:64:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/remote_ip.rb:76:in `call' actionpack (4.0.0) 
lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call' railties (4.0.0) lib/rails/rack/logger.rb:38:in `call_app' railties (4.0.0) lib/rails/rack/logger.rb:21:in `block in call' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `block in tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:25:in `tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `tagged' railties (4.0.0) lib/rails/rack/logger.rb:21:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/request_id.rb:21:in `call' rack (1.5.2) lib/rack/methodoverride.rb:21:in `call' rack (1.5.2) lib/rack/runtime.rb:17:in `call' activesupport (4.0.0) lib/active_support/cache/strategy/local_cache.rb:83:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/static.rb:64:in `call' railties (4.0.0) lib/rails/engine.rb:511:in `call' railties (4.0.0) lib/rails/application.rb:97:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' rack (1.5.2) lib/rack/content_length.rb:14:in `call' rack (1.5.2) lib/rack/handler/webrick.rb:60:in `service' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpserver.rb:138:in `service' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpserver.rb:94:in `run' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'
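The trace points at line 27 of the controller: @tvshow is never assigned in CharactersController, so @tvshow.characters.create(...) calls characters on nil, which raises the NoMethodError. Under Rails 4 you would also pass character_params rather than raw params[:character]. A sketch of one fix, assuming characters are nested under tvshows in config/routes.rb (resources :tvshows do resources :characters end; adjust if your routes differ):

    class CharactersController < ApplicationController
      before_action :set_tvshow, only: [:new, :create]

      def create
        # Building through the parent sets tvshow_id automatically.
        @character = @tvshow.characters.build(character_params)

        if @character.save
          redirect_to @character, notice: 'Character was successfully created.'
        else
          render action: 'new'
        end
      end

      private

      def set_tvshow
        # Assumes nested routes supply :tvshow_id; adjust if the id
        # comes from the form instead.
        @tvshow = Tvshow.find(params[:tvshow_id])
      end

      def character_params
        params.require(:character).permit(:first_name, :last_name, :bio)
      end
    end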

    Read the article

  • Installing Lubuntu 14.04.1 fails, upowerd appears to hang

    - by Rantanplan
    On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Thanks for some hints! On Lubuntu 12.04: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise upowerd appears to hang: Aug 25 10:53:28 lubuntu kernel: [ 367.920272] INFO: task upowerd:3002 blocked for more than 120 seconds. Aug 25 10:53:28 lubuntu kernel: [ 367.920288] Tainted: G S C 3.13.0-32-generic #57-Ubuntu Aug 25 10:53:28 lubuntu kernel: [ 367.920294] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 25 10:53:28 lubuntu kernel: [ 367.920300] upowerd D e21f9da0 0 3002 1 0x00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920314] e21f9dfc 00000086 f5ef7094 e21f9da0 c1050272 c1a8d540 c1920a00 00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920333] c1a8d540 c1920a00 d9e44da0 f5ef6540 c1129061 00000002 000001c1 0001c37b Aug 25 10:53:28 lubuntu kernel: [ 367.920351] 00000000 00000002 00000000 e2276240 00000000 00000040 c12b0ec5 c19975a8 Aug 25 10:53:28 lubuntu kernel: [ 367.920368] Call Trace: Aug 25 10:53:28 lubuntu kernel: [ 367.920389] [<c1050272>] ? kmap_atomic_prot+0x42/0x100 Aug 25 10:53:28 lubuntu kernel: [ 367.920404] [<c1129061>] ? get_page_from_freelist+0x2a1/0x600 Aug 25 10:53:28 lubuntu kernel: [ 367.920417] [<c12b0ec5>] ? process_measurement+0x65/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920432] [<c1654c73>] schedule_preempt_disabled+0x23/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920443] [<c16565bd>] __mutex_lock_slowpath+0x10d/0x171 Aug 25 10:53:28 lubuntu kernel: [ 367.920454] [<c1655aec>] mutex_lock+0x1c/0x28 Aug 25 10:53:28 lubuntu kernel: [ 367.920478] [<f857223a>] acpi_smbus_transaction+0x48/0x210 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920489] [<c11858e1>] ? do_last+0x1b1/0xf60 Aug 25 10:53:28 lubuntu kernel: [ 367.920504] [<f857242f>] acpi_smbus_read+0x2d/0x33 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920520] [<f881e0f1>] acpi_battery_get_state+0x74/0x8b [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920535] [<f881e8a9>] acpi_sbs_battery_get_property+0x2a/0x233 [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920549] [<c14fa61f>] power_supply_show_property+0x3f/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920561] [<c114664f>] ? handle_mm_fault+0x64f/0x8d0 Aug 25 10:53:28 lubuntu kernel: [ 367.920573] [<c14fa5e0>] ? power_supply_store_property+0x60/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920586] [<c1407d20>] ? dev_uevent_name+0x30/0x30 Aug 25 10:53:28 lubuntu kernel: [ 367.920597] [<c1407d38>] dev_attr_show+0x18/0x40 Aug 25 10:53:28 lubuntu kernel: [ 367.920608] [<c11dad15>] sysfs_seq_show+0xe5/0x1c0 Aug 25 10:53:28 lubuntu kernel: [ 367.920621] [<c119846e>] seq_read+0xce/0x370 Aug 25 10:53:28 lubuntu kernel: [ 367.920633] [<c11983a0>] ? 
seq_hlist_next_percpu+0x90/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920644] [<c1179238>] vfs_read+0x78/0x140 Aug 25 10:53:28 lubuntu kernel: [ 367.920654] [<c11799a9>] SyS_read+0x49/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920667] [<c165efcd>] sysenter_do_call+0x12/0x28 /var/log/installer/debug shows upower related error: Ubiquity 2.18.8 Gtk-Message: Failed to load module "overlay-scrollbar" Gtk-Message: Failed to load module "overlay-scrollbar" ERROR:dbus.proxies:Introspect error on :1.23:/org/freedesktop/UPower: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. Exception in GTK frontend (invoking crash handler): Traceback (most recent call last): File "/usr/lib/ubiquity/bin/ubiquity", line 636, in <module> main(oem_config) File "/usr/lib/ubiquity/bin/ubiquity", line 622, in main install(query=options.query) File "/usr/lib/ubiquity/bin/ubiquity", line 260, in install wizard = ui.Wizard(distro) File "/usr/lib/ubiquity/ubiquity/frontend/gtk_ui.py", line 290, in __init__ mod.ui = mod.ui_class(mod.controller) File "/usr/lib/ubiquity/plugins/ubi-prepare.py", line 93, in __init__ upower.setup_power_watch(self.prepare_power_source) File "/usr/lib/ubiquity/ubiquity/upower.py", line 21, in setup_power_watch power_state_changed() File "/usr/lib/ubiquity/ubiquity/upower.py", line 18, in power_state_changed not misc.get_prop(upower, UPOWER_PATH, 'OnBattery')) File "/usr/lib/ubiquity/ubiquity/misc.py", line 809, in get_prop return obj.Get(iface, prop, dbus_interface=dbus.PROPERTIES_IFACE) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__ return self._proxy_method(*args, **keywords) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__ **keywords) File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking message, timeout)
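Both traces implicate the ACPI smart-battery modules, sbs and sbshc, visible in the kernel call trace: upowerd blocks on them, and Ubiquity's power check then stalls waiting for upower. One workaround worth trying (a sketch, not guaranteed on every machine) is to unload and blacklist those modules in the live session before relaunching the installer:

    # Unload the suspected ACPI smart-battery modules (names from the trace above).
    sudo modprobe -r sbs sbshc

    # Keep them from loading again during this session.
    printf 'blacklist sbs\nblacklist sbshc\n' | sudo tee /etc/modprobe.d/blacklist-sbs.conf

    # Kill any hung upowerd (D-Bus restarts it on demand), then retry the installer.
    sudo killall upowerd 2>/dev/null
    ubiquity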

    Read the article

  • How To Switch Back to Outlook 2007 After the 2010 Beta Ends

    - by Matthew Guay
    Are you switching back to Outlook 2007 after trying out Office 2010 beta?  Here’s how you can restore your Outlook data and keep everything working fine after the switch. Whenever you install a newer version of Outlook, it will convert your profile and data files to the latest format.  This makes them work the best in the newer version of Outlook, but may cause problems if you decide to revert to an older version.  If you installed Outlook 2010 beta, it automatically imported and converted your profile from Outlook 2007.  When the beta expires, you will either have to reinstall Office 2007 or purchase a copy of Office 2010. If you choose to reinstall Office 2007, you may notice an error message each time you open Outlook. Outlook will still work fine and all of your data will be saved, but this error message can get annoying.  Here’s how you can create a new profile, import all of your old data, and get rid of this error message. Banish the Error Message with a New Profile To get rid of this error message, we need to create a new Outlook profile.  First, make sure your Outlook data files are backed up.  Your messages, contacts, calendar, and more are stored in a .pst file in your appdata folder.  Enter the following in the address bar of an Explorer window to open your Outlook data folder, and replace username with your user name: C:\Users\username\AppData\Local\Microsoft\Outlook Copy the Outlook Personal Folders (.pst) files that contain your data. Its name is usually your email address, though it may have a different name.  If in doubt, select all of the Outlook Personal Folders files, copy them, and save them in another safe place (such as your Documents folder). Now, let’s remove your old profile.  Open Control Panel, and select Mail.  In Windows Vista or 7, simply enter “Mail” in the search box and select the first entry. Click the “Show Profiles…” button. Now, select your Outlook profile, and click Remove.  This will not delete your data files, but will remove them from Outlook. Press Yes to confirm that you wish to remove this profile. Open Outlook, and you will be asked to create a new profile.  Enter a name for your new profile, and press Ok. Now enter your email account information to setup Outlook as normal. Outlook will attempt to automatically configure your account settings.  This usually works for accounts with popular email systems, but if it fails to find your information you can enter it manually.  Press finish when everything’s done. Outlook will now go ahead and download messages from your email account.  In our test, we used a Gmail account that still had all of our old messages online.  Those files are backed up in our old Outlook data files, so we can save time and not download them.  Click the Send/Receive button on the bottom of the window, and select “Cancel Send/Receive”. Restore Your Old Outlook Data Let’s add our old Outlook file back to Outlook 2007.  Exit Outlook, and then go back to Control Panel, and select Mail as above.  This time, click the Data Files button. Click the Add button on the top left. Select “Office Outlook Personal Folders File (.pst)”, and click Ok. Now, select your old Outlook data file.  It should be in the folder that opens by default; if not, browse to the backup copy we saved earlier, and select it. Press Ok at the next dialog to accept the default settings. Now, select the data file we just imported, and click “Set as Default”. 
Now, all of your old messages, appointments, contacts, and everything else will be right in Outlook ready for you.  Click Ok, and then open Outlook to see the change. All of the data that was in Outlook 2010 is now ready to use in Outlook 2007.  You won’t have to wait to re-download all of your emails from the server since everything’s still here ready to be used.  And when you open Outlook, you won’t see any error messages, either! Conclusion Migrating your Outlook profile back to Outlook 2007 is fairly easy, and with these steps, you can avoid seeing an error message every time you open Outlook.  With all your data intact, you’re ready to get back to work instead of getting frustrated with Outlook.  Many of us use webmail and keep all of our messages in the cloud, but even on broadband connections it can take a long time to download several gigabytes of emails.
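If you would rather script the backup step than copy the files by hand, here is a short PowerShell sketch (it assumes the default data-file location given earlier, and that Outlook is closed so the .pst files are not locked):

    # Copy every Outlook data file (.pst) into an Outlook-Backup folder in Documents.
    $source = Join-Path $env:LOCALAPPDATA 'Microsoft\Outlook'
    $dest   = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'Outlook-Backup'
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    Copy-Item -Path (Join-Path $source '*.pst') -Destination $dest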

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP.  Cutting to the point, below are the things to look for followed by some explanation about my scenario if you care to read.  This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario. Requirements Allow Remote Desktop Connections in guest OS. The network adapter type must allow communication with host machine (e.g. use an “Internal” virtual adapter.) If running Server 2008 R2 on guest, network discovery mode must be turned on. If running Server 2008 R2 on guest, the services supporting network discovery mode must be running: - DNS Client - Function Discovery Resource Publication - SSDP Discovery - UPnP Device Host My Environment     A quick word about my environment.  I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2.  I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs.  I’ve found this setup to work well except when it comes to the display window for my VMs. The Issue     Ever since I began running Hyper-V I haven’t been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow.  During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600.  That is, until today, when I decided to fully investigate why I couldn’t connect via RDP.     First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight.  As it turns out I had not 1, not 2, but 3 items preventing me from using RDP.  Let’s dig into the requirements above. Allow RDP Connection     This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections.  Change the setting from “Don’t allow…” to whichever “Allow connections…” setting suits your needs.  I chose the less secure option as this is just my dev laptop. Network Adapter Type     When I originally configured my VMs I configured each to use 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs.  The connection for the ethernet adapter is an “External” adapter and thus doesn’t connect between the host and guest.  The virtual private adapter allowed communication ONLY between the VMs and not to my host.  There is a third option, “Internal”, which allows communication between VMs as well as to the host.  After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs. Turn On Network Discovery     Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here.)  One of the settings that controls if a computer can be found on the network is aptly named Network Discovery.  By default Windows Server 2008 R2 turns Network Discovery off for security purposes.  To enable it open up the Network and Sharing Center.  Click “Change Advanced Sharing Settings” on the left.  
On the following screen select “Turn on network discovery” for the currently used profile and click Save Settings.  You may notice though that your selection to turn on network discovery doesn’t save.  If this is the case then you most likely don’t have the supporting services running (as was my case.) Network Discovery Supporting Services     There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here.)  The below images highlight these services.  In my guest VM I found that I had DNS Client already running while the other 3 were disabled.  I set them all to enabled and started the ones that were stopped.  After this change I returned to the Sharing settings screen and found that Network Discovery was turned on.  I’m not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on.  Either way the end result was a success. - DNS Client - Function Discovery Resource Publication - SSDP Discovery - UPnP Device Host Before and After Results     The first image is the smaller square shaped viewing window used by the Hyper-V native connection.  The second is the full-screen RDP connection in all its widescreen glory. Conclusion     Over the past few months I’ve found Hyper-V to be very useful for virtualizing my development environments, but I’ve also had a steep learning curve to get various items configured just right.  Allowing RDP connections to guest VMs was one area that I hadn’t been able to get right for the longest time.  Now that I resolved these issues I hope that others can avoid the pitfalls that I ran into.  If you know of any other items I left off feel free to let me know.        -Frog Out   Links Turning on Network Discovery http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx Services required for Network Discovery http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
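If you prefer the command line to the Services console, the four supporting services can be enabled and started in one pass. A sketch from an elevated prompt, using what I believe are the matching short service names (verify them against the friendly names in services.msc):

    REM Set each supporting service to start automatically (note the space after "=").
    sc config Dnscache start= auto
    sc config FDResPub start= auto
    sc config SSDPSRV start= auto
    sc config upnphost start= auto

    REM Start the ones that are stopped (DNS Client is usually already running).
    net start FDResPub
    net start SSDPSRV
    net start upnphost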

    Read the article

< Previous Page | 603 604 605 606 607 608 609 610 611 612 613 614  | Next Page >