Search Results

Search found 18003 results on 721 pages for 'nidhinzz own'.

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot.

    What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive.

    What I believe the problem is: when I attempt to send an email I get the following in my mail log:

        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.

    You can see where the message is accepted for delivery by my sendmail server, and then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set to. It is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network; you must instead use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain.

    What I have tried: setting a hosts entry so that the pmx0 server points to the mail server's IP address. This doesn't work, which makes sense: sendmail is obviously doing an MX query to find the server, which returns the IP, so there is never a need to do a "normal" DNS resolve and the hosts file never gets involved.
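
    A minimal sketch of the change this points to, assuming the stock m4-based sendmail config layout (rebuild sendmail.cf afterwards). Wrapping the smart host in square brackets tells sendmail to skip the MX lookup for that name and deliver to its A record directly, which is what you need when the ISP's published MX is unreachable from inside its own network:

        dnl relay everything through the ISP's submission host, no MX lookup
        define(`SMART_HOST', `[mail.bresnan.net]')dnl

        # then regenerate the config and restart, e.g.
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart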

    Read the article

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own and I can't find anything that would suit my needs. But now push has come to shove.

    Current situation: I have about 6 TB of data storage (already full) on a Drobo. Yesterday the Drobo died on me, which puts me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back, however I don't want to be in the same situation later, and continuing to use Drobo all but promises this event will re-occur.

    What I am looking for:

    1) Inexpensive setup.
    2) Dynamically extendable - add more drives and/or replace a drive with bigger capacity.
    3) Redundant - set up against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives one should be able to fail without data loss.
    4) Easy data recovery - if the unforeseen happens, I would like to be able to recover information without buying new tools or replacements (example: a new Drobo).
    5) Should be USB or Network Attached Storage.
    6) No demand on speed. It doesn't have to be fast, I am not doing video editing on the setup. However, if the option exists, it would be nice to have decent speed.

    After thoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but that seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question - I saw some horror stories.

    Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore "After thoughts".

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password.

    This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems.

    What I'd like to be able to do:

    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, that would lack the deny token, and provide admin access.

    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs, uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". So I did some research, and the answer I got from the coder is below. I did all of that, but the one thing I could not do is turn off what he refers to as the performance cache, discussed in the last paragraph. I am on CentOS.

    Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

    1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or into some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

        # Turn off mod_security filtering.
        SecFilterEngine Off
        # The below probably isn't needed,
        # but better safe than sorry.
        SecFilterScanPOST Off

    If the above method does not work, try putting the following lines into the file:

        SetEnvIfNoCase Content-Type \
        "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        mod_gzip_on No

    2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.
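
    Since the advice above is written for shared hosts and this is your own CentOS box, a couple of quick checks may save time (standard Apache/CentOS commands; paths assume the stock httpd package):

        # is mod_security loaded at all?
        httpd -M 2>/dev/null | grep -i security
        grep -ri "mod_security\|SecFilterEngine\|SecRuleEngine" /etc/httpd/conf /etc/httpd/conf.d

        # after editing, check syntax and reload
        httpd -t && service httpd reload

    If mod_security isn't loaded, the .htaccess lines are a no-op and the write buffering is coming from somewhere else (for example mod_gzip/mod_deflate or a proxy in front of Apache).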

    Read the article

  • Synchronizing an XML file with a MySQL database

    - by Fred K
    My company uses internal management software for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website. Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML). The synchronization does not have to be in real time; synchronizing the MySQL database once a day (late at night) is enough. Also, each product in their software has one or more images, so I also have to make the images available on the website.

    Here is an example of an XML export:

        <?xml version="1.0" encoding="UTF-8"?>
        <export_management userid="78643">
          <product id="1234">
            <version>100</version>
            <insert_date>2013-12-12 00:00:00</insert_date>
            <warrenty>true</warrenty>
            <price>139,00</price>
            <model>
              <code>324234345</code>
              <model>Notredame</model>
              <color>red</color>
              <size>XL</size>
            </model>
            <internal>
              <color>green</color>
              <size>S</size>
            </internal>
            <options>
              <s_option>some option</s_option>
              <s_option>some option</s_option>
              <extra_option>some option</extra_option>
              <extra_option>some option</extra_option>
            </options>
            <images>
              <image>
                <small>1234_0.jpg</small>
              </image>
              <image>
                <small>1234_1.jpg</small>
              </image>
            </images>
          </product>
        </export_management>

    Any ideas on how I can do this? Better approaches are also welcome.
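
    A minimal sketch of the nightly job this implies, assuming a hypothetical import script (import_products.php and all paths are placeholders, not something the management software ships with) that parses the exported XML and upserts the rows:

        # /etc/cron.d/product-sync  -- run every night at 02:30
        30 2 * * *  www-data  /usr/bin/php /var/www/tools/import_products.php /exports/products.xml >> /var/log/product-sync.log 2>&1

    The script would typically key on the product id and use INSERT ... ON DUPLICATE KEY UPDATE (or truncate and reload a staging table), and copy the referenced image files into the web root in the same pass.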

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so no network problems can cause it, and OpLocks can't be a reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it has to keep working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special Windows Server settings for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this?

    Sometimes we get an index crash twice a day, sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually there are 30-50 users working simultaneously during working hours), so there was almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does UPDATE/INSERT queries, but less frequently - not during regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • 20GB+ worth of emails in my /home: what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and history has proven to me that backups work but Thunderbird is capable of losing a lot of mail very easily.

    Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which in my own experience works efficiently up to about 1000 emails; beyond that, some archive directories should be used to move mail around. I have been looking into DBMail, but I wonder whether such a solution makes my case better or worse. None of the supported databases employ string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like GZIP transparently, which could help. Could someone share their thoughts?

    The 20GB mostly consists of mailing lists and normal mail, not things like attachments.

    To add some clarifications: as we speak, my mail is not indexed server-side at all - only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts, locally mirrored Gmail and IMAP accounts. In my view it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, and I'm quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot, never had any issues with it. I just wonder if this is the way to go, or if there are any better solutions.

    What my question is: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up, and archive-worthy? It doesn't need to be available 24x7.

    The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
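
    For reference, a minimal sketch of what server-side compression can look like on Dovecot 2.x (the zlib plugin and the mdbox mailbox format are standard Dovecot features; the paths and the compression level here are assumptions, adjust to your layout):

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = mdbox:~/mdbox

        # /etc/dovecot/conf.d/20-imap.conf
        protocol imap {
          mail_plugins = $mail_plugins zlib
        }

        # /etc/dovecot/conf.d/90-plugin.conf
        plugin {
          zlib_save = gz
          zlib_save_level = 6
        }

    New mail written to the archive is then gzip-compressed transparently, which helps a lot with mailing-list text.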

    Read the article

  • My VPS ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean that requests for my PHP websites take a long time, ranging from very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggy.

    The websites are: askmike.org (pretty standard WordPress) and mvr.me (own PHP).

    How slow? Very slow: loading a clean install of WordPress. Slow: loading a small PHP-based website.

    The VPS: it has 256 MB of RAM and a 25 GB hard disk. Besides serving the 2 small websites it isn't doing anything, AFAIK.

    What I have installed: a clean Ubuntu Server 12.04, a LAMP stack, a few things like git and node.js (not using either), ossec (because I thought my server was getting hammered) and munin.

    What I have already tried / done: I installed munin so that I could watch I/O speed and such. The problem is that I don't know what to look for in the munin report. I checked the logs and don't see anything strange (although I don't really know what to look for besides strange / repetitive errors and GET requests). I configured the Apache MPM to:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild   0
        </IfModule>

    (Apache is using prefork, the default.)

    Stats: I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared web host. Note that tonight my MySQL crashed somewhere after 1:00 (which is a new problem altogether), so the graph for last night might look strange.

    Can anyone help me get my VPS up to normal speed?

    EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host; I'm also Dutch). I did two speed tests for disk I/O:

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s
        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s

    Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down, but 5-30 seconds load time for a normal PHP web page?
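
    A few quick checks that usually tell whether a small VPS is being starved by its neighbours or by its own disk (standard Linux tools; on Ubuntu, iostat comes from the sysstat package):

        # CPU steal time (the 'st' column) - high values mean the host node is oversold
        vmstat 5

        # per-device I/O latency and utilisation
        iostat -x 5          # apt-get install sysstat

        # what Apache/MySQL are actually doing while a request hangs
        top -c

    If steal time sits above a few percent, or await/%util in iostat stay pegged while your dd test runs, that is evidence you can take to the VPS host.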

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a funny little experiment in the game/sim "Train Simulator 2013". I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic: it takes things into account like speed/acceleration, G-forces, comfort levels, possible wheel slip and much more, and most of those for each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so.

    I have a Logitech G15 keyboard which has an LCD that lets me monitor CPU/RAM and video card load. The strange thing is, all CPU cores were busy, but the total load was only about 60% at most at all times. The video card was only at 30% load (a possibly important note: its memory was full, which is however not unusual for the game in question). The RAM had plenty of room and there weren't many operations, as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware power. Why is it not doing so?

    I saw the same thing in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (that all use scripting) plus a few high-res texture mods and a full-on graphics improvement program. The engine is very old (2003), so I thought this might be the cause (not being optimised for multithreading).

    I had thought of possible causes, like: the operating system doesn't let the games use all the resources, or the games don't make use of multithreading appropriately. To eliminate the former, I tried a CPU stress tool and it got 100% CPU juice as I let it run, so the OS is not the problem. I gave its thread the "higher" priority though.

    My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerate be limited because their own engine is unable to cope? What is the real reason and, more importantly, can I do anything about it? And in any case, could something actually be wrong with my hardware? It's all reasonably new, a couple of months old, and I (almost) never experience any other trouble. Modern and much more demanding games work absolutely fine.

    Specs:
    - CPU: AMD Phenom II 965 X4 @ 3.4 GHz
    - RAM: 8 GB of DDR3 RAM
    - Video: MSI GTX560 (nVidia chip) with 1 GB of GDDR5 memory
    - OS: Windows 7 Ultimate 64-bit
    - Nothing overclocked.

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem!

    The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header".

    When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html)

    The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls).

    I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously.

    Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache - ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
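
    For reference, a sketch of running htcacheclean continuously instead of in one-shot mode (these flags exist in the stock Apache 2.2 htcacheclean; the interval and limit values here are just examples):

        # prune in the background every 30 minutes, niced, removing empty dirs as it goes
        htcacheclean -d30 -n -t -p /htcache -l 15G

    Daemon mode spreads the deletions out over time, which may keep IOwait lower than a single multi-hour pass, but it does not change the underlying scan cost of a CacheDirLevels=2 / CacheDirLength=1 tree.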

    Read the article

  • IE9 Error: There was a problem sending the command to the program

    - by HK1
    I'm working on a new/fresh Windows 7 32-bit machine that now has IE9 installed. The user uses the Dell Stardock application as his primary "desktop" (all his links are there). When we place an internet link there and click on it, we get the following error message: "There was a problem sending the command to the program."

    To me this indicates that IE9 is having trouble going to the website we want, which should get passed as a parameter to the browser when it opens. I don't think this is a Stardock/ObjectDock problem, because we also have some other problems with internet links. For example, we cannot move an internet link from the Desktop to the Quick Launch area on the task bar; when we try, it forces us to put the link into the IE menu (under the IE icon) instead of allowing us to have the shortcut there as its own entry. I should mention, however, that links on the desktop and in the taskbar do work as we expect them to (without showing the above error message).

    It appears that this problem started after installing Windows updates. Since we installed a whole bunch of updates at once, I have no idea which one caused the problem. I did have Google Chrome installed, but I uninstalled it since the user wants to use IE; the problem started before I uninstalled Chrome. I also reset the browser settings in IE9 - it didn't help. Next I uninstalled IE9, which took me back to IE8. This actually did resolve the problem, but the problem came back as soon as I installed IE9 again.

    We have Verizon Internet Security installed. It's actually a McAfee product rebranded to look like Verizon. I'm not real crazy about this software, but the customer has a subscription so we're not planning to change it. I have no reason to believe that it is causing the problem, and yet I know that security software is often to blame for strange issues.

    I've looked at the registry settings for the following keys and everything appears to be OK for every single one of them:

        HKEY_CLASSES_ROOT\.htm
        HKEY_CLASSES_ROOT\.html
        HKEY_CLASSES_ROOT\http\shell\open\command
        HKEY_CLASSES_ROOT\http\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\https\shell\open\command
        HKEY_CLASSES_ROOT\https\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\htmlfile\shell\open\command
        HKEY_CLASSES_ROOT\Microsoft.Website\Shell\Open\Command

    Edit 1: I've found two potential solutions but I won't be able to try them until tomorrow, since this is a remote client's machine. One is to disable the "Windows Font Cache" service. The other is to clear the IE cache and browsing history. I see there are lots of other suggestions online, but if you take the time to read them through you'll see that those suggestions didn't fix the problem.

    Read the article

  • Undelivered Mail Returned to Sender

    - by Alex
    When sending to [email protected] via the PHP mail() function, I receive mail. When sending emails from external machines, I receive the following (e.g. when sending from [email protected]; mail.ru is the Russian Gmail):

        This is the mail system at host fallback2.mail.ru.

        I'm sorry to have to inform you that your message could not
        be delivered to one or more recipients. It's attached below.

        For further assistance, please send mail to <postmaster>

        If you do so, please include this problem report. You can
        delete your own text from the attached returned message.

                           The mail system

        <[email protected]>: lost connection with mail.mydomain.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

        Reporting-MTA: dns; fallback2.mail.ru
        X-mPOP-Fallback_MX-Queue-ID: D8C19F2411F1
        X-mPOP-Fallback_MX-Sender: rfc822; [email protected]
        Arrival-Date: Tue, 29 Oct 2013 10:09:21 +0400 (MSK)

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 4.4.2
        Diagnostic-Code: X-mPOP-Fallback_MX; lost connection with mail.tld.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

    Here is my Postfix main.cf:

        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        myhostname = mail.mydomain.com
        mydomain = mydomain.com
        myorigin = mydomain.com
        inet_interfaces = all
        inet_protocols = all
        unknown_local_recipient_reject_code = 550
        in_flow_delay = 1s
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mail_name = mydomain.com daemon
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        bounce_queue_lifetime = 4h
        maximal_queue_lifetime = 4h
        delay_warning_time = 1h
        strict_rfc821_envelopes = yes
        show_user_unknown_table_name = no
        allow_percent_hack = no
        swap_bangpath = no
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 20
        smtpd_soft_error_limit = 1
        smtpd_hard_error_limit = 3
        smtpd_junk_command_limit = 2
        mydestination = mydomain.com, localhost.localdomain, localhost
        smtpd_client_restrictions = permit_inet_interfaces
        smtpd_recipient_limit = 100
        virtual_alias_domains = mydomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    Why are emails from external servers not being delivered? Thank you!

    Update: in the log, the following lines appear many times:

        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: connect from fallback5.mail.ru[94.100.176.59]
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: warning: SASL: Connect to private/auth failed: Connection refused
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: fatal: no SASL authentication mechanisms

    It appears I have to configure SASL? I would understand if I wanted to send emails from Postfix, but why do I need it to receive emails?
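
    The log points at Postfix not finding Dovecot's SASL socket: with smtpd_sasl_auth_enable = yes and no reachable mechanism, smtpd dies on every inbound connection, which matches the "lost connection while receiving the initial server greeting" bounce. A minimal sketch of the Dovecot 2.x side (the socket path must match smtpd_sasl_path = private/auth relative to the Postfix queue directory; the mode/user values are the commonly used ones, adjust to taste):

        # /etc/dovecot/conf.d/10-master.conf
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode  = 0660
            user  = postfix
            group = postfix
          }
        }

    After reloading Dovecot and running `postfix reload`, the "Connect to private/auth failed" and "no SASL authentication mechanisms" lines should disappear; inbound mail to your own domain is accepted without authentication in any case, SASL only matters for the permit_sasl_authenticated relay rule.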

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that are available to us before we send the base image in for replication by the rental company.

    Every year, presenters approach us on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes it is much larger, around 5 GiB. Our current strategy for pushing these files uses batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy approach to push the torrent into a folder on the remote machines that is monitored by an installed BT client. Often, this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes.

    We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task.

    We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale, plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines do not have a hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers.

    Is multicast or BitTorrent the better way to go for this? Is there another protocol that might work better?
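
    For what it's worth, a sketch of the kind of unicast push loop we mean (robocopy's /MIR, /Z and /MT flags are standard on Windows 7; the share names, paths and rooms\room12.txt host list are assumptions):

        @echo off
        rem push a presenter drop folder to every machine listed for a room
        for /f %%H in (rooms\room12.txt) do (
            robocopy "D:\drops\room12" "\\%%H\c$\ConferenceData" /MIR /Z /MT:8 /R:2 /W:5 /LOG+:push-room12.log
        )

    With several hundred targets, unicast copies like this multiply the data across the switch, which is exactly why BitTorrent or multicast starts to look attractive.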

    Read the article

  • Why is Windows 7 not following all routes?

    - by GigabyteProductions
    My computer is connected to my secondary router, which runs the 192.168.42.0/24 network, and my computer also has a route that directs anything on that network to the router. But for anything on that network other than the router itself, I get the ICMP response "Reply from 192.168.42.194: Destination host unreachable." (192.168.42.194 being my own computer). Every other network works, like all of the internet, or addresses on my primary router like 192.168.1.*; just not the 192.168.42.0/24 network.

    route print returns:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1   192.168.42.194     276
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
             192.168.42.0    255.255.255.0          On-link   192.168.42.194     276
           192.168.42.194  255.255.255.255          On-link   192.168.42.194     276
           192.168.42.255  255.255.255.255          On-link   192.168.42.194     276
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link   192.168.42.194     276
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link   192.168.42.194     276
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1  Default
        ===========================================================================

    The only time anything is supposed to send an ICMP Host Unreachable response is when there's no route to the destination, right? So why is my own computer sending that to ping or tracert when I have the route 192.168.42.0 with the mask 255.255.255.0? An IP address of 192.168.42.2 surely fits that route.

    If I explicitly add a route for the IP address I am trying to access, it works, e.g.:

        route add 192.168.42.2 mask 255.255.255.255 192.168.42.1

    (the 192.168.42.1 right after the mask is the gateway, i.e. the device to send the packet to so it can route it further), but why won't it work with the implicit route that's automatically in the table? I disabled my firewall, too (I use Comodo, if anyone thinks this could still be the problem). I've even tried explicitly adding the gateway 192.168.42.1 to the 192.168.42.0/24 route instead of letting it route through the 0.0.0.0 gateway, which is what On-link does, but that didn't work either, so it's not a gateway specification problem.

    If the host were really unreachable, it would be the router's IP address (192.168.42.1) sending that response to me... This network is entirely my own creation, so there's no problem such as an administrator locking me out, because I am the administrator.

    Read the article

  • My USB bootable thumb drive no longer boots on one particular computer on which it previously worked

    - by LiamMeron
    I created a bootable drive, booting CrunchBang, about 2 months ago. About a month ago, I booted from it on another laptop. Since shutting that down, I have been unable to get it to boot on my own laptop, despite it having worked there previously. I can still boot from it on my desktop, and I can also boot from it on every other computer I have tried.

    If I plug it in while I am running Ubuntu, the home and / partitions mount; the only error is that for some reason my PC likes to try to mount the swap partition too, which naturally gives an error. The BIOS settings are all still set up to boot from USB. When I boot, all I get is a black screen with the white cursor, and it will stay there for as long as I leave it. If it is worth anything, I have the GRUB loader installed on the drive.

    The partitions look a little odd, but I am rather unfamiliar with how they are handled:

    - The first partition, /, is sdb1 and has the bootable flag.
    - The second partition is an extended partition, and is sdb2.
    - The third partition, according to GParted, seems to be nested under the second. This is the swap partition, and it is sdb5.
    - The fourth partition is my home partition, is sdb6, and is also nested under the extended partition.
    - The first and fourth partitions are ext4.

    I don't know if that helps, but the more info, the better accuracy, generally. Thanks.

    EDIT: I tried reinstalling GRUB on the drive, but that didn't work. However, when I reinstalled GRUB on my laptop, I did it with the USB thumb drive plugged in. This caused the GRUB updater to find the /boot folder and add the proper entries to my laptop's GRUB loader. I could boot into CrunchBang from my laptop's GRUB, but I was still unable to boot directly from the drive. It looks like my BIOS is unable to find the bootloader. I am unable to install GRUB to a partition I just created (a /boot partition at the start of the drive); GRUB just doesn't allow it. I think I'm going to have to reinstall #! on my drive, which won't be a great loss as I haven't put much time into it.

    Read the article

  • BlueScreens on my ThinkPad with Windows 7 64 Bit and a SSD (CRITICAL_OBJECT_TERMINATION, ntoskernel.exe)

    - by pvorb
    I've been getting BlueScreens about every five days for more than three months. Here's an example:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        The problem seems to be caused by the following file: ntoskrnl.exe

        CRITICAL_OBJECT_TERMINATION

        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:

        Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any Windows updates you might need.

        If problems continue, disable or remove any newly installed hardware or software. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select Advanced Startup Options, and then select Safe Mode.

        Technical Information:
        *** STOP: 0x000000f4 (0x0000000000000003, 0xfffffa80065f2b30, 0xfffffa80065f2e10, 0xfffff80002f9bf40)
        *** ntoskrnl.exe - Address 0xfffff80002c98d00 base at 0xfffff80002c19000 DateStamp 0x4d9fdd5b

    It has always been the same BlueScreen message, showing CRITICAL_OBJECT_TERMINATION, 0x000000f4, and ntoskrnl.exe. Of course the addresses change.

    My computer is a ThinkPad T400 (about 2 years old) with an SSD in it, running Windows 7 Professional 64-bit. When I bought the computer, it had a 250 GByte Seagate HDD in it, which I replaced with a 500 GByte Western Digital HDD. Last September I bought a Corsair F120 SSD and replaced the HDD with this SSD. Then I bought a LEICKE HDD adapter for the UltraBay II, where I plugged in my 500 GByte HDD. This configuration ran for about half a year without any errors.

    After re-installing Windows this spring, I am getting regular BlueScreens. Sometimes my system runs for about 2 weeks without a BSOD, sometimes I get several BlueScreens a day. The only thing I have noticed is that I'm always running Google Chrome when it happens.

    Has anyone had similar bad experiences with some of my components, or can anybody tell me whether it would be helpful to send my notebook to Lenovo? Thank you very much for your help on my issue!

    Regards, Paul

    Read the article

  • Debugging JBoss 100% CPU usage

    - by Nate
    We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it.

    At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time, however pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs.

    I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users, but I have not been able to replicate the problem.

    Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred, to minimize downtime. Next time it happens they will get a developer to take a look. The question is: next time it happens, what can be done to determine the cause?

    We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs, we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much, though.

    Should I enable remote JMX? That way, the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling remote JMX in a production environment?

    Is there another way to see which threads are eating the CPU and to get a stack trace to see what they are doing? Any other ideas? Thanks!
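
    One common way to answer the "which threads, doing what" question without enabling remote JMX (top and jstack are standard tools; jstack ships with the JDK, so this assumes a JDK rather than a bare JRE on that box, and a single JBoss java process - otherwise substitute the PID by hand):

        # 1. find the hot threads inside the JBoss java process
        top -H -p $(pgrep -f 'org.jboss.Main')

        # 2. note the PID of a busy thread and convert it to hex
        printf '%x\n' 12345        # -> 3039

        # 3. dump all stacks and look for the matching nid=0x3039 entry
        jstack -l $(pgrep -f 'org.jboss.Main') > /tmp/jboss-threads.txt

    Taking the jstack dump two or three times a minute apart helps: threads that stay on the same frames across dumps are the 100% CPU suspects.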

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website that essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    - User opens a page to view the web camera image.
    - A JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    - An FTP connection is enabled for the camera's FTP user.
    - The web camera opens an FTP connection to the server, begins taking photos, and sends each photo to the FTP server.
    - On each image URL request: the server reads the latest image uploaded via FTP for that camera from the hard drive, and deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

    My original plan was that, instead of reading the files from disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, which should have let us scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight to reading from the hard drive.

    The firmware provider for the cameras states they are able to build an HTTP client which, instead of uploading the image over FTP, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera POSTs its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. Clients can then pull the latest image from Redis, which should be extremely quick as the images are always in memory. The data flow would then become:

    - User opens a page to view the web camera image.
    - A JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    - The camera is sent an HTTP request to start posting images to a provided URL.
    - The web camera begins taking photos and sends POST requests to the server as fast as it can.
    - On each image URL request: the server reads the latest image from Redis and tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent ourselves from effectively DoS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix make posting the images too slow?

    Thanks in advance, Alan
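
    A minimal sketch of the Redis key scheme the proposed flow implies (shown with redis-cli only to illustrate the data path; the camera id, key names and TTL are assumptions, and in practice the SET/GET would be issued from the PHP handlers):

        # what the upload endpoint would do on each POST from camera 42
        redis-cli SET cam:42:latest "<jpeg bytes>" EX 30

        # what the image URL handler would do on each poll
        redis-cli GET cam:42:latest

    Writing with a short expiry (EX) also gives the "delete the older image" behaviour for free: if a camera stops posting, its key simply ages out and the viewer can show an "offline" placeholder.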

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: We need a rock-solid, reliable file-mover process. We have source directories that are often being written to, from which we need to move files. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory, and it gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: We have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - on which we cannot presently run our own scripts. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers.

    We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs or poorly handled error cases. None of us are Java experts, so fixing/improving it may be difficult.

    Concerns for us: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: We are concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
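
    If we did just script up rsync, a sketch of the shape it might take (all flags are standard rsync; the paths are placeholders, the source directory is assumed to be flat, and this assumes the SFTP sources also allow ssh + a remote rsync binary, which Windows-only SFTP servers often do not):

        # 1. pull the big binaries first, leaving the sources in place
        rsync -av --partial --exclude='*.xml' sftpuser@source:/outgoing/ /staging/room/

        # 2. only once the binaries have landed, pull the small XML indexes
        rsync -av --include='*.xml' --exclude='*' sftpuser@source:/outgoing/ /staging/room/

        # 3. push the verified bundle to AIX and clean up the local staging copy
        rsync -av --remove-source-files /staging/room/ aixuser@dest:/ingest/

    rsync writes each file to a temporary dot-file and renames it into place only when the transfer completes, so the ingestion process on the destination never sees a half-written file; whether the source file is still being written is something rsync cannot know, which is why the CTL-file completeness check still matters.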

    Read the article

  • Need a script/batch/program that runs a command that won't be killed when the parent is killed

    - by billc.cn
    The scenario: I use Zabbix to monitor my servers, and recently I wanted to add some more metrics for the Windows ones. For security reasons, I used Zabbix's User Parameter feature, but it limits the execution of external commands to about 3 seconds; after that, the command is forcibly killed. I want to run some long-running commands, so I used the trick from Zabbix's forum: run the command in the background, write the results to a file, and use Zabbix to collect them. This is rather easy under *nix thanks to the "&" operator, but there is no such support in Windows' shell. To make things worse, when Zabbix forcibly kills the cmd.exe it used to evaluate the commands, all child processes die, including the unfinished background tasks. Thus I need something that can sever all ties with its children so they won't be affected by the cascading kill.

    What I've tried:

    - start and start /B: they do nothing, as the child always dies with the parent.
    - WScript.Shell.Run as in invis.vbs from Stack Overflow: sometimes works. If the wscript process is forcibly killed, as opposed to quitting on its own, the children die as well.
    - hstart: similar results to invis.vbs.
    - The at command: this requires you to set an absolute time for the task to run, as opposed to an offset, so the code would be quite messy given the limited shell-scripting capability of Windows.
    - (Edit) PsExec.exe from the SysInternals suite: it uses a service to launch the command, so it is not affected by the kill; however, it prints some banner and log info to stderr and there's no switch to disable this. When I use 2>NUL to redirect it, Zabbix reports an error.

    After trying the above in different combinations, I noticed that if I call hstart from invis.vbs, the command started by the former is left alone as a parent-less process when invis.vbs is killed. However, since I need to redirect the output, the command I want to run is always of the form cmd.exe /c ""command" "args"" >log. The VBS also removes all the quotes, so I have to encode the command with self-defined escape sequences. The end result involves about five levels of escaping/quoting, which is almost impossible to maintain.

    Does anyone know any better solutions? Some requirements:

    - Any bat/vbs/js/Win32 binary is acceptable.
    - Preferably it should not require multiple levels of escaping.
    - No .NET (including PowerShell), because it is not installed.
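
    One more avenue that may be worth testing (wmic is built into Windows 7; whether a WMI-spawned process actually survives Zabbix's kill in this setup is exactly the thing to verify, so treat this as an untested sketch with placeholder paths):

        rem UserParameter command: hand the long job to WMI instead of running it inline
        wmic process call create "cmd.exe /c C:\zabbix\scripts\longjob.bat > C:\zabbix\out\longjob.txt 2>&1"

    Because the child is created by the WMI provider service rather than by the agent's cmd.exe, it is not part of the process tree that dies when the agent gives up after 3 seconds, and only one level of quoting is involved.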

    Read the article

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

    - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
    - Robustness: we would like to set up a RAID array and maybe have an external disk where we can store backups.
    - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer. The appliances could be: LAMP, Tomcat + PostgreSQL, SQL Server.
    - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

    - Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    - What RAID configuration do you suggest? I was thinking of RAID1 (probably cheaper) or RAID5.
    - Should we buy a hardware RAID controller or is it OK to use software RAID (mdadm, as sketched below)? If a controller, which one do you suggest?
    - What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    - How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    - What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    - What OS do you recommend? We know Ubuntu, Debian and Gentoo very well (we would like to use Ubuntu Server), however it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day.
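
    Since software RAID with mdadm came up, a minimal sketch of what it looks like on Ubuntu (device names are assumptions; this works without any hardware controller at this budget):

        # two-disk RAID1 for the VM store
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mkfs.ext4 /dev/md0

        # record the array so it assembles at boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf

        # check health
        cat /proc/mdstat

    RAID1 also keeps recovery simple compared to RAID5 on only three or four disks: either member disk remains readable on its own.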

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers were able to connect to it successfully with their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

    ##############################################
    # Sample client-side OpenVPN 2.0 config file #
    # for connecting to multi-client server.     #
    #                                            #
    # This configuration can be used by multiple #
    # clients, however each client should have   #
    # its own cert and key files.                #
    #                                            #
    # On Windows, you might want to rename this  #
    # file so it has a .ovpn extension           #
    ##############################################

    # Specify that we are a client and that we
    # will be pulling certain config file directives
    # from the server.
    client

    # Use the same setting as you are using on
    # the server.
    # On most systems, the VPN will not function
    # unless you partially or fully disable
    # the firewall for the TUN/TAP interface.
    ;dev tap
    dev tun

    # Windows needs the TAP-Win32 adapter name
    # from the Network Connections panel
    # if you have more than one. On XP SP2,
    # you may need to disable the firewall
    # for the TAP adapter.
    ;dev-node MyTap

    # Are we connecting to a TCP or
    # UDP server? Use the same setting as
    # on the server.
    ;proto tcp
    proto udp

    # The hostname/IP and port of the server.
    # You can have multiple remote entries
    # to load balance between the servers.
    remote xx.xxx.xx.130 1194
    ;remote my-server-2 1194

    # Choose a random host from the remote
    # list for load-balancing. Otherwise
    # try hosts in the order specified.
    ;remote-random

    # Keep trying indefinitely to resolve the
    # host name of the OpenVPN server. Very useful
    # on machines which are not permanently connected
    # to the internet such as laptops.
    resolv-retry infinite

    # Most clients don't need to bind to
    # a specific local port number.
    nobind

    # Downgrade privileges after initialization (non-Windows only)
    ;user nobody
    ;group nogroup

    # Try to preserve some state across restarts.
    persist-key
    persist-tun

    # If you are connecting through an
    # HTTP proxy to reach the actual OpenVPN
    # server, put the proxy server/IP and
    # port number here. See the man page
    # if your proxy server requires
    # authentication.
    ;http-proxy-retry # retry on connection failures
    ;http-proxy [proxy server] [proxy port #]

    # Wireless networks often produce a lot
    # of duplicate packets. Set this flag
    # to silence duplicate packet warnings.
    ;mute-replay-warnings

    # SSL/TLS parms.
    # See the server config file for more
    # description. It's best to use
    # a separate .crt/.key file pair
    # for each client. A single ca
    # file can be used for all clients.
    ca ca.crt
    cert home.crt
    key home.key

    # Verify server certificate by checking
    # that the certicate has the nsCertType
    # field set to "server". This is an
    # important precaution to protect against
    # a potential attack discussed here:
    # http://openvpn.net/howto.html#mitm
    #
    # To use this feature, you will need to generate
    # your server certificates with the nsCertType
    # field set to "server". The build-key-server
    # script in the easy-rsa folder will do this.
    ns-cert-type server

    # If a tls-auth key is used on the server
    # then every client must also have the key.
    ;tls-auth ta.key 1

    # Select a cryptographic cipher.
    # If the cipher option is used on the server
    # then you must also specify it here.
    ;cipher x

    # Enable compression on the VPN link.
    # Don't enable this unless it is also
    # enabled in the server config file.
    comp-lzo

    # Set log file verbosity.
    verb 3

    # Silence repeating messages
    ;mute 20

But when I start OpenVPN and look in /var/log/syslog, I notice the following error:

    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
    May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

And I am unable to connect to the server via OpenVPN:

    $ ssh [email protected]
    ssh: connect to host xxx.xx.xx.130 port 22: No route to host

What may I be doing wrong?
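One way to narrow this down is to re-run the exact command OpenVPN attempted and compare the pushed routes against the local routing table; the following is only a minimal diagnostic sketch (the addresses are copied from the log above, and the tun0 interface name is an assumption, so check the actual name in the OpenVPN log):

    # Show the current routing table; look for an existing route or local subnet
    # that overlaps 10.27.12.0/30 or 172.27.12.0/24
    ip route show

    # Re-run the route command exactly as OpenVPN issued it (as root), to see the
    # kernel's error message rather than just the exit status
    sudo /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37

    # Confirm the tunnel interface actually came up with an address (tun0 assumed)
    ip addr show tun0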

    Read the article

  • SharePoint 2010 – Central Admin tooling to create host header site collections

    - by eJugnoo
    Just like SharePoint 2007, you can create host-header based site collections in SharePoint 2010 as well. This means that you do not necessarily need to create a site collection under a managed path like /sites/; you can create multiple root-level site collections on the same web application/port by using host-header site collections. All you need to do is point your domain or sub-domain to your web application and create a matching site collection. But, just like in 2007, it is something that you do by using STSADM, and it is not available in the Central Admin UI in 2010 either. Yeah, though you can now also use PowerShell to create one:

    C:\PS>$w = Get-SPWebApplication http://sitename
    C:\PS>New-SPSite http://www.contoso.com -OwnerAlias "DOMAIN\jdoe" -HostHeaderWebApplication $w -Title "Contoso" -Template "STS#0"

This example creates a host header site collection. Because the template is provided, the root Web of this site collection will be created.
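For comparison, the STSADM route mentioned above looks roughly like this; this is only a sketch, and the -hostheaderwebapplicationurl switch (short form -hhurl) and the sample values are illustrative, so verify them with stsadm -help createsite on your farm:

    C:\>REM illustrative values - check the exact switches with: stsadm -help createsite
    C:\>stsadm -o createsite -url http://www.contoso.com -ownerlogin DOMAIN\jdoe -owneremail jdoe@contoso.com -sitetemplate STS#0 -title "Contoso" -hostheaderwebapplicationurl http://sitename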
I've been playing with WCM in SharePoint 2010 more and more, and for that I preferred creating hosts file entries for the desired domains and creating site collections by those headers – in my dev environment. I used PowerShell initially, but then got interested in building my own UI on Central Admin instead.

Developed with Visual Studio 2010

So I used the new Visual Studio 2010 tooling to create an empty SharePoint 2010 project. I added an application page (there is no option to add an _Admin page item in VS 2010 RC), which got created in the Layouts "mapped" folder. I created a new Admin mapped folder for the 14-"hive" and moved my new page there instead. Yes, I didn't change the base class for the page; it's just that it runs under _admin, but it is indeed a LayoutsPageBase-inherited page.

To introduce an action link in the Central Admin console, I created the following element:

    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <CustomAction
        Id="CreateSiteByHeader"
        Location="Microsoft.SharePoint.Administration.Applications"
        Title="Create site collections by host header"
        GroupId="SiteCollections"
        Sequence="15"
        RequiredAdmin="Delegated"
        Description="Create a new top-level web site, by host header" >
        <UrlAction Url="/_admin/OfficeToolbox/CreateSiteByHeader.aspx" />
      </CustomAction>
    </Elements>

I used Reflector to understand any special code behind createpage.aspx, and created a new one for our purpose – CreateSiteByHeader.aspx. From there I quickly created a similar code-behind, without all the fancy Farm Config Wizard handling, and dealt with alternate implementations of sealed classes! The goal was to create a professional-looking, OOB-type experience. I also added Regex validation to ensure the user types a valid domain name as the header value. Below is the result…

Release @ Codeplex

I've released the WSP on OfficeToolbox @ Codeplex, and you can download it from here. Hope you find it useful…

-- Sharad

    Read the article

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you are going to buy the iPhone 4S on a two-year contract in the USA, Europe or Australia, you may not find it expensive. But if you are planning to buy it in any other part of the world, you will definitely feel the heat of the ridiculous iPhone 4S price. In India the iPhone 4S costs approximately $1000, which is 30% more than the price tag of an unlocked iPhone sold in the USA. Personally I love iPhones, as there is no match for the user experience provided by Apple as well as the wide range of really meaningful applications available for the iPhone. But it breaks the heart to spend $1000 on a phone, and I'm forced to look at the alternatives available in the market. Here are the four iPhone 4S alternatives available in almost all the countries where we can buy the iPhone 4S.

Google Galaxy Nexus

The Galaxy Nexus is Google's own Android smartphone, manufactured by Samsung and sold under the Google Nexus brand. The Galaxy Nexus is the pure Android phone available in the market, without any bloatware or custom user interfaces like the other Androids on sale. The Galaxy Nexus is also the first Android phone to ship with the latest version of the Android OS, Ice Cream Sandwich. This phone is the benchmark for the rest of the Android phones that are going to enter the market soon. In the words of Google, this smartphone is “Galaxy Nexus: Simple. Beautiful. Beyond Smart.” BGR's review summarizes the phone as follows:

    This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry.

Samsung Galaxy S II

The one company that is able to sell more smartphones than Apple is Samsung. Samsung recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung's Galaxy S II fits as one of the best alternatives to Apple's iPhone 4S with its beautiful design and remarkable performance. Engadget's review of the Samsung Galaxy S II sums it up as follows:

    It's the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won't suit everyone, no matter how stupendously thin the device that carries it may be, and we also can't say for sure that the Galaxy S II would justify a long-term iOS user forsaking his investment into one ecosystem and making the leap to another. Nonetheless, if you're asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it's just as simple as that.

Nokia Lumia 800

Here comes an unexpected Windows Phone into the boxing ring. Maybe they are not as great as the Androids available in the market today, but they are picking up very quickly. In particular, the Nokia Lumia 800 seems to be the first Windows Phone 7 device aimed at competing seriously with the Androids and iPhones available in the market. There are reports that the Nokia Lumia 800 is outselling all Androids in the UK, and a few high-profile tech blogs are calling it the king of Windows Phone. Considering this phone while evaluating alternatives to the iPhone 4S will not disappoint you. We assure you.

Droid RAZR

Remember the Motorola Droid that swept the entire Android market a couple of years ago? The first two versions of the Motorola Droid were the best in the market and outperformed almost every other Android phone in those days. With the invasion of Samsung's Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems to be heading in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is an excerpt from Engadget's review of the phone:

    the RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus

Further Reading

So we have seen the four alternatives to the iPhone 4S available in the market, and I would personally love to buy a Samsung smartphone if I don't have the money to afford an iPhone 4S. If you are interested in deep diving into the alternatives, here are a few links that will help you do more research:

    Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare by Huffington Post
    Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown by PC World
    Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II by Gizmodo
    iPhone 4S vs Samsung Galaxy S II by Pocket-lint
    Apple iPhone 4S vs. Samsung Galaxy S II by Techie Buzz

This article, titled Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article
