Search Results

Search found 10515 results on 421 pages for 'automatically'.

  • Some services refusing to start on Win 7 machine. What could the root cause be?

    - by BombDefused
    When I check msconfig, there are no services that are blocked from starting up. When I look in services.msc, the problem services have a startup type of 'Automatic', but have a blank space where others show 'Started'. Attempting to start them manually results in the pop-up error messages below. I have no idea what's causing this; it looks like some sort of cascade effect from another problem service. It's affecting the Task Scheduler, SQL Server Agent and Windows Backup services. How can I resolve this? I don't know how to work out what the root cause is.

    Task Scheduler service start error: "Windows could not start the Task Scheduler service on Local Computer. Error 1068: The dependency service or group failed to start."

    SQL Server service start error: "The SQL Server Agent service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

    UPDATE: I've just noticed some other services have a description of "Failed to Read Description. Error Code: 2". They are: NetMsmqActivator, NetPipeActivator, NetTcpActivator and NetTcpPortSharing.

    UPDATE 2: As joeqwerty says, the Event Log service does seem to be the root of the problem. This service will not start either. It fails with 'Error 31 - A device attached to the system is not functioning correctly'. I've tried detaching all devices. I've also followed the advice here, where the same problem is described, but with no luck: http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/44479c49-55e6-4bd7-b25e-3f2a6497306e

    UPDATE 3: @Pacey - The following was a good tip, with really clear instructions: "Your problem might also derive from the UpperFilter or LowerFilter settings of the CDROM drive. These are a known cause for error code 31. You can find step-by-step instructions on removing the filters on about.com." However, I found that those registry keys do not exist on my system. I followed the advice through to checking every component in Device Manager separately, but everything is reported as working correctly!? These services did all work at one point, and the hardware setup hasn't changed much. Guess I'm looking at a repair install, maybe?
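
    Since error 1068 points at a broken dependency chain, one quick way to walk it (a diagnostic sketch only; the service names are the Windows 7 defaults) is to query each service's dependencies with sc from an elevated prompt:

        rem list what the Task Scheduler service depends on
        sc qc Schedule
        rem then check the state of each listed dependency, e.g. the Event Log:
        sc query eventlog
        rem repeat down the chain until you find the lowest service that won't start

    Working from the bottom of that chain upwards usually identifies the real culprit; here it appears to be the Event Log service itself.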

  • Thunderbird 3: can't change column width?

    - by rumtscho
    I recently installed Thunderbird 3.0.3 and just noticed a suboptimal UI setting: in the upper pane, which lists the e-mails in the current folder, the Date column is about 200px wide. So when I keep the window at 480x600, all I see in a row is: | tree icon | favourites icon | attachment icon | read icon | junk icon | date and time, followed by 5cm of whitespace | ... | P, where "P" is the first letter of the name of the sender. The "..." is actually shown this way; I have no idea which column it is meant to be. But I see neither the sender nor the message subject, which makes scrolling a folder for a certain mail rather pointless. I do see these when I maximize the window; the columns are then not only bigger, they are arranged in another sequence. But I feel that holding a mail client permanently maximised at 1600x1200 is a waste of screen real estate.

    My naive solution attempt was to move the mouse cursor to the right edge of the Date column and try to shrink it by moving the cursor left while holding down the left mouse button. Not only is this the default behaviour for all resizable columns I've ever encountered in GUIs, the cursor actually turns into a horizontal double-headed arrow. But pulling has no effect at all. I cannot make a wide column narrow, and I cannot make the narrow columns wide. I didn't find anything in the preferences either. So can somebody please explain how to get the columns arranged sensibly?

    Edit: I found out that I only have the problem when I drag the Thunderbird window to a GridMove screen area. It gets automatically resized, but doesn't notice the resize event or something, so the column width remains the same as under a maximized window. First making the window narrow using the mouse helps with the column width, but the mail pane is still too wide (rows don't reflow). Anyway, this seems to be a bug caused by the combination of the two applications and not a configuration problem, so I guess I'll have to live with it.

  • freebsd-update from 8.3-RELEASE to 9.0-RELEASE: How to deal with dozens of diffs?

    - by Stefan Lasiewski
    I am upgrading a FreeBSD 8.3-RELEASE system to FreeBSD 9.0-RELEASE using freebsd-update. This is my first time performing a major version upgrade in FreeBSD. At one point in the process, freebsd-update performs a diff on files which are different from what is expected for 9.0-RELEASE. It compares the current version on the system with the new changes added from 9.0-RELEASE. There are dozens of files in the list. Thus, I am presented with dozens and dozens of diffs which open in a vi window and look like this:

        The following file could not be merged automatically: /etc/ntp.conf
        Press Enter to edit this file in vi and resolve the conflicts manually...

        ### vi window opens
        <<<<<<< current version
        driftfile /etc/ntp/drift
        =======
        #
        # $FreeBSD: release/9.0.0/etc/ntp.conf 195652 2009-07-13 05:51:33Z dwmalone $
        #
        # Default NTP servers for the FreeBSD operating system.
        #
        # Don't forget to enable ntpd in /etc/rc.conf with:
        # ntpd_enable="YES"
        #
        # The driftfile is by default /var/db/ntpd.drift, check
        # /etc/defaults/rc.conf on how to change the location.
        #
        >>>>>>> 9.0-RELEASE
        restrict default notrust nomodify ignore

    And so on. This requires that I manually edit each file and remove the markers like <<<<<<< current version, ======= and >>>>>>> 9.0-RELEASE. As I discovered afterwards, if I don't remove these markers, they end up in the merged file. There are dozens of files which differ between 8.3 and 9.0, and I have a dozen local modifications myself. It appears that freebsd-update is using a diff, sdiff or mergemaster function of some sort, but I can't tell exactly what it is doing. Processing these files is tedious. Is there a way that I can just say "Accept new version", "Keep old version" or "Your merge is correct"? There has got to be an easier way to deal with these files. I must be missing something. This isn't a huge problem for one machine, but eventually I'll be doing this dozens of times and I want to find an easier way.
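
    One configuration knob that may cut down the number of interactive merges (a hedged suggestion; check freebsd-update.conf(5) on your own system before relying on it): freebsd-update only attempts merges for paths listed under the MergeChanges directive in /etc/freebsd-update.conf, so narrowing that list to the files you have actually customized avoids being dropped into vi for the rest.

        # /etc/freebsd-update.conf -- the stock file lists /etc/ here
        # (and on some releases /var/named/etc/); trimming the set of
        # paths reduces the number of vi sessions
        MergeChanges /etc/

    Paths outside the listed prefixes are simply not offered for interactive merging, which is exactly the tedium being described here.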

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I keep my documents, music, video clips, movies and pictures on my hard disk. These are not scattered around the system; everything is inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk. A few important requirements and points:

    - I do NOT need multiple or historic versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense: if files were not touched since the last backup, they need not be copied.
    - The size of my folder will run into GBs, and in a year or two will go into TBs, but I will make sure the external HDD is equal to or bigger than my source folder.
    - I do not want it to run automatically, because when I accidentally delete a file in my source, an automatic run would delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually, so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in that situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know if there is an existing program that can do it. Kindly don't worry about earthquakes, fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times! In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
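
    For what it's worth, a one-line mirror of exactly this kind is within reach of the robocopy tool that ships with Windows 7 (a sketch, assuming E: is the external disk; test on unimportant data first, since /MIR deletes extras on the destination):

        rem mirror C:\Senthil to E:\Senthil - copies new/changed files, removes deleted ones
        robocopy C:\Senthil E:\Senthil /MIR /R:1 /W:1

    Run it manually whenever a backup is wanted; nothing happens otherwise.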

  • ISPconfig3 + CentOS 6.2 , confused on how to move forward after initial install?

    - by Damainman
    I installed ISPConfig 3 on CentOS 6.2 using the great guide on howtoforge.com. Everything is up and running and I can access ISPConfig via a browser. However, I am not sure how to move forward with the initial setup so I can create the very first account and get my website live. Details: I only have one server; CentOS + ISPConfig runs in a virtual machine on XEN XCP. I set the server name to server1.mydomain.com. I only have two usable IPs, and I plan to use them as follows:

    - xx.xx.xx.01: for my website and the websites of all accounts I add.
    - xx.xx.xx.02: for ns1.mydomain.com and ns2.mydomain.com (yes, I know they should be different IPs at different locations, but this is what I have to work with at the moment).

    I registered the nameservers at my registrar with the .02 IP. I want to use BIND and ISPConfig to run DNS on the server itself, not via my registrar. Right now, if I go to the .01 IP, it shows the CentOS + Apache successful-install page. So, to break it down, I am not sure where to start when it comes to:

    1. Telling BIND to use the nameserver domains with .02.
    2. Setting up my first website (which will be my main website) in ISPConfig so that mydomain.com resolves properly to my server.
    3. Making it so that when you go to the .01 IP, it either redirects to or shows the contents of my main website (if this can't be done, any advice is appreciated).
    4. Making sure that when I add a new domain, the proper information is filled in automatically so it points to the right mail, database and DNS entries.

    If I overlooked a tutorial, please feel free to let me know; any advice would be greatly appreciated. Some of the tutorials I found were not specific to doing everything on one server with CentOS + Apache + BIND. Right now all I have done is install CentOS and ISPConfig 3. I am trying to move forward correctly so I don't mess up everything I did by not knowing what to do. Thank you in advance!
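
    For step 1, the zone that ends up in BIND needs A records for both nameserver names on the .02 address, plus NS records claiming them; a minimal hand-written fragment looks roughly like this (a sketch only: mydomain.com and the xx.xx.xx placeholders stand in for the real names and IPs, and ISPConfig's DNS wizard can create the equivalent records for you):

        ; fragment of the mydomain.com zone
        mydomain.com.        IN  NS   ns1.mydomain.com.
        mydomain.com.        IN  NS   ns2.mydomain.com.
        ns1.mydomain.com.    IN  A    xx.xx.xx.02
        ns2.mydomain.com.    IN  A    xx.xx.xx.02
        mydomain.com.        IN  A    xx.xx.xx.01
        www.mydomain.com.    IN  A    xx.xx.xx.01

    The registrar-side nameserver registration on .02 then matches what the zone itself claims.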

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple HTTP connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds).

    What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as the total number of concurrently connected HTTP clients). The method I've come up with is to track HTTP requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum, then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table and that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again.

    I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command) and checking the table size with the DBSIZE command. This worked, but the performance was horrible: nginx and redis ended up using lots of CPU and the machine could only handle a very small number of concurrent requests.

    The new stick-table and tracking counters that were added to haproxy in version 1.5 (via a commission from Server Fault) seem like they might be ideal for implementing exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick-table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, for nginx or haproxy, or even for something else not mentioned here that I haven't thought of yet.
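
    On the haproxy side, a stick-table sketch of the 20-second IP tracking might look like the following (a hypothetical config fragment for haproxy 1.5; the names and limits are made up, and the "count the whole table" step is exactly the part haproxy does not expose to in-line ACLs):

        frontend longpoll
            bind *:80
            # one entry per client IP, expiring 20s after the last request
            stick-table type ip size 100k expire 20s store conn_rate(20s)
            http-request track-sc0 src
            default_backend upstream

    The number of unexpired entries is visible externally via the admin socket (this assumes a "stats socket /var/run/haproxy.stat" line in the global section):

        echo "show table longpoll" | socat stdio /var/run/haproxy.stat | head -1

    The first line of "show table" output includes a size/used summary, so an external script could poll it; a per-request ACL on the global table count, as the question asks for, is the missing piece.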

  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. This has been running flawlessly for over a year, but I was recently forced to strip, move and rebuild the machine. When I had it all back together and ran up Ubuntu, I had some problems with disks not being detected. A couple of reboots later and I'd solved that issue. The problem now is that the 3-disk array shows up as degraded every time I boot. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d1 : inactive sdf1[2](S)
              1953511936 blocks

        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    I have two RAID arrays, and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I tried first stopping the existing array that Ubuntu started:

        root@uberserver:~# mdadm --stop /dev/md_d1
        mdadm: stopped /dev/md_d1

    ...and then tried to tell Ubuntu to pick the disks up again:

        root@uberserver:~# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 2 drives (out of 3).

    So that has started md1 again, but it's not picking up the disk from md_d1:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid5 sde1[1] sdf1[2]
              3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

        md_d1 : inactive sdd1[0](S)
              1953511936 blocks

        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again?

    [Edit] - After adding md1 to mdadm.conf it now tries to mount the array on startup, but it's still missing the disk. If I tell it to try to assemble automatically, I get the impression it knows it needs sdd but can't use it:

        root@uberserver:~# mdadm --assemble --scan
        /dev/md1: File exists
        mdadm: /dev/md/1 already active, cannot restart it!
        mdadm: /dev/md/1 needed for /dev/sdd1...

    What am I missing?
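
    A minimal sketch of one way out, assuming sdd1 is healthy and md_d1 is just a stray assembly (verify the device names against your own mdstat before running anything):

        # stop the stray single-disk array so sdd1 is free again
        mdadm --stop /dev/md_d1
        # re-add the disk to the real array; RAID5 will resync onto it
        mdadm /dev/md1 --add /dev/sdd1
        # watch the rebuild progress
        cat /proc/mdstat
        # then persist the correct definitions so boot assembles them properly
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u

    The last two steps matter because the stray md_d1 usually comes from the initramfs assembling arrays from stale metadata before the real mdadm.conf is consulted.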

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH), along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and follow = True is in my unison profile.

    It happens that this collection of odds and ends (plus an automatically generated text file listing every package installed on my system) is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not, and the manpage lists no option for enabling any such behaviour. Comparing unison's file-selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin.

    I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that:

    - the hard disk is NTFS-formatted in order to be usable at my Windows-imprisoned place of occupation;
    - despite being a USB 3.0 disk, my computer has no USB 3.0 ports, so it acts as a USB 2.0 disk.

    How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
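
    One workaround (a sketch, not duplicity doctrine: it trades disk space for simplicity, and the staging path, key ID and backup URL are invented for illustration) is to let rsync dereference the symlink farm into a plain staging tree and point duplicity at that, so both tools consume literally the same file set:

        #!/bin/sh
        # rsync -aL copies the *targets* of symlinks; --delete keeps the
        # staging tree in lockstep with the farm of links
        rsync -aL --delete ~/sync-links/ ~/.backup-staging/
        # then back the staging tree up with encryption and signing
        duplicity --encrypt-key MYKEY --sign-key MYKEY \
            ~/.backup-staging sftp://user@backuphost/backups

    Since unison and duplicity now both work from the same directory of links (directly, or via the staged copy), the two file sets cannot drift apart.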

  • Set up Gmail with Google apps for own domain

    - by erdomester
    I rent a server from a German company and have remote access to it as well as WHM and cPanel. I decided to use Google's mail servers for obvious reasons. I am not an admin, just an average guy trying to set up what needs to be set up. The problem is I am unable to make the necessary settings. I watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings. The domain console always says:

        Your MX records are incorrect

    When I check dappwall.com on mxtoolbox.com it says:

        Pref  Hostname           IP Address   TTL
        10    mail.dappwall.com  46.4.88.247  24 hrs

    But this is not the hostname. I checked WHM and my hostname is server1.dappwall.com; I can confirm it by typing the hostname command in PuTTY. However, if I do an MX lookup at mxtoolbox.com on server1.dappwall.com or mail.dappwall.com, I get:

        Lookup failed after 1 name servers timed out or responded non-authoritatively

    I ran checks with the Google Apps Toolbox on dappwall.com and two problems emerged:

    1. "No Google mail exchangers found. Relayhost configuration? 10 mail.dappwall.com". In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX record for dappwall.com is:

        Priority  Points to
        10        MAIL.DAPPWALL.COM.

    So mail.dappwall.com again. I also have access to a robot provided by the company I rent the server from. There I see this mail entry in two places, but how should I modify it (if that's even necessary)? I set email routing to "Automatically Detect Configuration".

    2. "There SHOULD be a valid SPF record: 'v=spf1 include:_spf.google.com ~all'". In the DNS Zone Editor I added this SPF record:

        Name           TTL   Class  Type  Record
        dappwall.com.  1440  IN     TXT   v=spf1 include:_spf.google.com ~all

    On the cPanel Email Authentication page it says:

        SPF: Status: Enabled
        Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com. [?]
        Your current raw SPF record is: v=spf1 include:_spf.google.com ~all

    How can I confirm that my server is an authoritative nameserver for dappwall.com? In WHM > Service Configuration > Mailserver Selection, Dovecot was set, but I disabled it (I don't know if that's OK). What am I missing here? Where is that mail.dappwall.com coming from?
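
    For reference, routing a domain's mail to Google Apps means the MX set has to name Google's exchangers rather than any host on the server itself; the records Google documented at the time looked like this (a sketch of the zone entries; verify the current list in your Google Apps control panel before copying):

        dappwall.com.  IN  MX  1   ASPMX.L.GOOGLE.COM.
        dappwall.com.  IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
        dappwall.com.  IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
        dappwall.com.  IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
        dappwall.com.  IN  MX  10  ASPMX3.GOOGLEMAIL.COM.

    The lone "10 mail.dappwall.com" record is most likely the hosting company's stock template for local delivery, which is why both mxtoolbox.com and the Google console keep reporting it; replacing it with Google's set (and setting email routing to remote rather than local) is the fix being asked for.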

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that sit mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy, the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, protecting against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of it like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN; I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and that's just a paper dating back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

  • Apache returns 403 Forbidden for alternative port vhost

    - by Wesley
    I'm having an issue getting vhosts to work on Apache 2.2 on Debian 6. I have two VirtualHosts, one on port 80 and one on port 8888. The port 80 one was created automatically by DirectAdmin; the 8888 one is custom. Its configuration is as follows:

        <VirtualHost *:8888 >
            DocumentRoot /home/user/public_html/development
            ServerName www.myserver.nl
            ServerAlias myserver.nl
            <Directory "/home/user/public_html/development">
                Options +Indexes +FollowSymLinks +MultiViews
                AllowOverride All
                Order Allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Of course I also have a NameVirtualHost *:8888. The port 80 DocumentRoot is /home/user/public_html/production, which is perfectly accessible and works like a charm. The port 8888 docroot of /home/user/public_html/development is 403 Forbidden, though. I have compared the permissions for both folders; they seem fine to me:

        drwxr-xr-x 2 root root 4096 Aug 17 16:14 development
        drwxr-xr-x 4 root root 4096 Aug 18 04:29 production

    The same goes for the index.html file which is supposed to display when accessing through port 8888, located in /development/:

        -rwxr-xr-x 1 root root 41 Aug 17 16:14 index.html

    I have looked at my error_log and found many of the following entries, added only when accessing through port 8888:

        [Sat Aug 18 04:35:09 2012] [error] [client 27.32.156.232] Symbolic link not allowed or link target not accessible: /home/user/public_html

    /home/user/public_html is a symbolic link that refers to /home/user/domains/mydomain/public_html. The symbolic link has the following permissions:

        lrwxrwxrwx 1 admin admin 29 Aug 17 15:56 public_html -> ./domains/mydomain/public_html

    I'm at a loss. It seems that everything is readable or executable. I've set the Directory to FollowSymLinks in the httpd.conf file, but that doesn't seem to make a difference. If I change that directory tag to <Directory "/home/admin/public_html"> (so it has FollowSymLinks on that as well), it still does not work. Any help is greatly appreciated. If I need to post more information, let me know; I'm pretty much a beginner at this stuff.

    UPDATE: I ended up changing the configuration to point directly at the actual path of the files, avoiding the public_html symlink altogether. That worked. Thanks for the suggestions, folks. That is:

        DocumentRoot /home/user/domains/mydomain/public_html/development

    instead of:

        DocumentRoot /home/user/public_html/development
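
    A note on why the direct DocumentRoot works (and a hedged alternative, kept for reference): Apache checks FollowSymLinks against the Directory section that applies to the location of the *link*, not of its target, and it does so for every path component. So the other likely fix would have been granting the option on the parent directory that actually contains the symlink, along the lines of:

        <Directory "/home/user">
            Options +FollowSymLinks
        </Directory>

    If the running config uses SymLinksIfOwnerMatch anywhere on that path instead, the link owner (admin here) must also match the target's owner, which is another way a correct-looking setup can end up 403.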

  • Secure copy uucp style

    - by Alexander Janssen
    I often have to make a lot of hops to reach a remote host, just because there is no direct routing between my client and that host. When I need to copy files from a remote host two or more hops away, I always have to:

        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .

    Back when uucp was still being used, this would be as simple as:

        uucp host1!host2!host3 /myfile .

    I know there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without setting up a lot of tunnels or deploying new software to remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts and does the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding.

    Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution for copying files bangpath-style using scp via multiple hops, without installing uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something that already does it: minimally invasive changes to hosts on the bangpath, simple invocation from the client.

    Edit 2: To give you an impression of how it's properly done for interactive sessions, have a look at the GXPC cluster shell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA" and it just works. I want to have this for scp.
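
    One client-only approach (a sketch, assuming a reasonably recent OpenSSH on the client; nothing touches the shared system-wide ssh config, since -F points at a private file you keep anywhere):

        # hops.conf -- private ssh config describing the bangpath
        # (use an absolute path for hops.conf in practice)
        Host host2
            ProxyCommand ssh -F hops.conf -W %h:%p host1
        Host host3
            ProxyCommand ssh -F hops.conf -W %h:%p host2

    and then a single copy from the client:

        scp -F hops.conf host3:/myfile .

    Each ProxyCommand repeats -F because the spawned inner ssh does not inherit the flag. The -W option needs OpenSSH 5.4 or newer on the client; older setups can substitute "nc %h %p" run on the intermediate host instead.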

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM; the goal is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write and execute files.

    However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work. This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice. Any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random; it has to do with how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error.
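
    A workaround worth trying (a sketch; the path is invented, and it assumes the asset cache does not need to live on the shared folder): keep the project on the share, but relocate just tmp/cache onto the VM's native filesystem, which sidesteps the Parallels share's permission quirks during compilation.

        # from the Rails app root inside the Debian VM
        mkdir -p ~/rails-cache/myapp
        rm -rf tmp/cache
        ln -s ~/rails-cache/myapp tmp/cache

    The sources stay on OS X for TextMate and backups; only the throwaway cache moves.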

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd it has been getting disabled automatically by OpsCenter. From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes where the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur while the repair service is running, are the only errors these nodes experience; outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought it was caused by some configuration error on my part; there, too, the problem appeared spontaneously at some point. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with, and we have not noticed any problems with the applications that depend on it.
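
    As a stopgap while the repair service is down (a hedged suggestion, not a diagnosis of the range error itself): anti-entropy repair can still be run by hand with nodetool, once per node within gc_grace_seconds, for example:

        # repair only the node's primary ranges, one keyspace at a time
        nodetool repair -pr zs_logging
        nodetool repair -pr OpsCenter

    That keeps the cluster's repair obligations met while the OpsCenter issue is investigated with DataStax.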

  • New PC does not work with existing router, but works fine when connected directly to the cable modem

    - by user34786
    I bought a new desktop PC (an eMachines ET1331G-03W from Walmart) with Windows 7 installed, but I cannot access the internet by connecting it to my existing wireless router (a Linksys BEFW11S4) with a wired cable, even though all my other desktops and laptops have no problem connecting to the same router. The new desktop works fine and connects to the internet if I bypass the router and hook it up directly to the cable modem. On the new PC, when connected to the router, typing ipconfig gives the information below; the IP address looks wrong to me:

        Autoconfiguration IPv4 Address. . : 169.254.71.140
        Subnet Mask . . . . . . . . . . . : 255.255.0.0
        Default Gateway . . . . . . . . . : (empty)
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Typing ipconfig on all the other desktops and laptops gives values like these, which look good to me:

        Connection-specific DNS Suffix . :
        IP Address. . . . . . . . . . . . : 192.168.1.140
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1

    The wireless router is on 192.168.1.1. I do not know why the new desktop got the 169.254.71.140 IP; it should have something like 192.168.1.xxx, and it is configured to get an IP automatically via DHCP. I have tried switching cables, powering off the cable modem and router, and rebooting the new PC many times, with no luck. So I believe this is an issue related only to the router or to the new PC's configuration. Can someone help me figure out the issue?
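
    For context on the symptom (a diagnostic sketch; nothing here changes configuration permanently): a 169.254.x.x address is Windows' APIPA fallback, which the machine assigns itself only when its DHCP request gets no answer, so the PC and router are failing to complete DHCP rather than merely routing badly. From an elevated command prompt on the new PC:

        rem drop the self-assigned address and ask the router's DHCP server again
        ipconfig /release
        ipconfig /renew
        rem then confirm the result; look for "DHCP Enabled . . . : Yes"
        rem and a 192.168.1.x lease handed out by 192.168.1.1
        ipconfig /all

    If the renew times out while the same cable and port work for other machines, the usual suspects are the new PC's NIC driver or the router's DHCP client table/filtering.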

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit with the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I installed expat and expat-devel, plus neon-devel, using yum. When installing Apache Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command as above, I get this error after running "make":

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    I found out I can solve that problem by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    so that it looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    I want to solve that, but after a lot of experimenting I have not been able to figure out how to specify the exact location of expat in the configure command, or how to find out what that location should be. After a lot of searching, however, I found this: http://subversion.tigris.org/issues/show_bug.cgi?id=3997 - a FreeBSD user saying:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    But I am not using FreeBSD; I am running CentOS 6.3 64-bit, where, remember, expat was installed with yum. I tried the expat path posted by the FreeBSD user anyway, and it seems to work: it gives no errors when running the configure command or "make". This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this server is a production server, so I need your advice: is this also correct to run on a CentOS server? Is the following path in the expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
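
    Since the flag's format is includes:lib_search_dirs:libs, the paths can be read straight off the packages yum installed rather than borrowed from FreeBSD (a sketch; confirm the output on your own box before using it, since 64-bit CentOS keeps libraries in /usr/lib64):

        # where did yum put expat's header and library?
        rpm -ql expat-devel | grep expat.h      # typically /usr/include/expat.h
        rpm -ql expat-devel | grep '\.so'       # typically /usr/lib64/libexpat.so
        # then hand configure those exact directories:
        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    The FreeBSD paths point at directories that are typically empty on a yum-managed CentOS box, so configure most likely fell back to its own detection there; pointing at the real locations is the more defensible choice for a production server.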

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' account with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications, especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require administrator access (e.g. Seagate BlackArmor) or simply fail without it; since these programs send raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without any prompts. Since they are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:

    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.

    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (But everyone calls it "sudo" anyway.)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs; uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if that were solved.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again".

  • Cannot access boot menu with compaq 8510p

    - by pinouchon
    I have a problem with my HP Compaq 8510p laptop: when I start it, the fan spins up and the power light is on, but the screen displays nothing. When I insert a bootable CD, the drive light comes on (meaning that the disc is recognized), but it stops after a few seconds. The same goes for any hard drive: the drive is recognized but does not boot. What I've tried so far:

    - Changing the hard drive, or booting with no hard drive (same problem)
    - Plugging another display in via VGA: no output on the other screen
    - Inserting a Windows 7 CD (same problem)
    - Booting on battery only, with battery and power cable, and with the power cable only (same problem)

    So it looks like something is preventing the laptop from booting and displaying the boot menu. Have you experienced something similar with a laptop? What could be wrong? The laptop is out of warranty. The system used to be Windows 7 x64.

    Edit: I went to the help desk of my university. A guy took a look (he also tried plugging in an external screen) and said that the computer is dead: on these HP laptops the GPU eventually dies, and so does the motherboard, because they are linked. He has seen this many times; even if I can fix the problem, the laptop will crash again after a while. Do you have similar experience with HP laptops? (Mine is 4 years old.)

    Edit 2: Believe it or not, my laptop is magically working again. I have no clue what is going on. Starting it is now like starting an old car: when you turn it on, you secretly hope it will actually start... With that said, I expect my laptop to break again in the near future (it's an HP, after all), and I will accept an answer or add my own accordingly.

    Edit 3: As expected, the laptop is down again. This time, when I power it up, it sometimes shuts down automatically after 3 seconds, sometimes not at all. In addition, when it does not shut down on its own, the power button does not work: the only way to shut it down is to unplug the battery. As before, the screen is black, and only the power and battery lights are on (the others, hard drive and wifi, are off). I have tried plugging in another power supply, removing the battery and removing the hard drive, without success. I might buy another laptop. I've brought this one to a repair shop; the problem is indeed that the graphics card is dead, and it will be replaced with a new one.

  • How to auto advance a PowerPoint slide after an exit animation is over?

    - by joooc
    A PowerPoint entrance animation set up with "Start: With Previous" starts right when a new slide is advanced. However, if you set up an exit animation the same way, it doesn't start with the slide's ending sequence. Instead, the "Start: On Click" trigger needs to be used, and after your exit animation is over you still need one extra click just to advance to the next slide.

    Workarounds for this are obvious: create a duplicate slide, make the ending animations of the original slide the starting animations of the duplicate and let them be followed by whatever you want; or create a transition slide with only those ending animations and set "Advance slide - Automatically after - [the time it takes your animations to finish]". These workarounds make it work for your audience, visually. However, they have an impact on slide numbers, which you might need to adjust, and/or duplicate content changes. If you are the only one creating and using your presentation, this might be fine. But if you are creating a presentation in collaborative mode with three other people and don't even know who will be the presenter at the end, you can mess things up.

    Let's be specific: most of my slides have a 0.2s fly-in entrance animation applied to blocks of content coming from the right, bottom or left. Advancing to the next slide, I want them to fly out in another 0.2s exit animation, followed by the 0.2s fly-in entrance animation of the new slide's blocks. The swapping of the blocks should be triggered while advancing to the next slide, as usual. As mentioned, I'm not able to achieve this without one extra click between the slides.

    I wrote a VBA script that should start together with an exit animation and will auto-advance the slide after 0.3s, when the exit animation is over. That way I should get rid of the extra clicks which are needed right now:

        Sub nextslide()
            iTime = 0.3
            Start = Timer
            While Timer < Start + iTime
                DoEvents
            Wend
            With SlideShowWindows(1).View
                .GotoSlide (ActivePresentation.SlideShowWindow.View.Slide.SlideIndex + 1)
            End With
        End Sub

    It works well when bound to a box, button or another object. But I can't make it run on a single click anywhere on the slide, so that it could start together with the exit animation's on-click trigger. Creating a big transparent rectangular shape over the whole slide and binding the macro to it doesn't help either: clicking it only runs the macro; the exit animation is not triggered. Anyway, I don't want to bind the macro to any workaround object other than the slide itself.

    Does anyone know how to trigger a PowerPoint VBA script on a slide's onclick event? Does anyone know a secret setting that will make the exit animation work as expected, i.e. animating right before exiting a slide while transitioning to the next one? Anyone know how to beat this dragon? Thank you!

  • Why is Windows 7 not following all routes?

    - by GigabyteProductions
    My computer is connected to my secondary router, which runs the 192.168.42.0/24 network, and my computer has a route that directs anything on that network to the router. But for anything on that network other than the router itself, I get the ICMP response:

        Reply from 192.168.42.194: Destination host unreachable.

    (192.168.42.194 being my computer.) Every other network works, like all of the internet, or addresses on my primary router like 192.168.1.*; just not the 192.168.42.0/24 network. route print returns:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway         Interface       Metric
        0.0.0.0                    0.0.0.0          192.168.42.1    192.168.42.194  276
        127.0.0.0                  255.0.0.0        On-link         127.0.0.1       306
        127.0.0.1                  255.255.255.255  On-link         127.0.0.1       306
        127.255.255.255            255.255.255.255  On-link         127.0.0.1       306
        192.168.42.0               255.255.255.0    On-link         192.168.42.194  276
        192.168.42.194             255.255.255.255  On-link         192.168.42.194  276
        192.168.42.255             255.255.255.255  On-link         192.168.42.194  276
        224.0.0.0                  240.0.0.0        On-link         127.0.0.1       306
        224.0.0.0                  240.0.0.0        On-link         192.168.42.194  276
        255.255.255.255            255.255.255.255  On-link         127.0.0.1       306
        255.255.255.255            255.255.255.255  On-link         192.168.42.194  276
        ===========================================================================
        Persistent Routes:
        Network Address            Netmask          Gateway Address  Metric
        0.0.0.0                    0.0.0.0          192.168.42.1     Default
        ===========================================================================

    The only time anything is supposed to send an ICMP Host Unreachable response is when there's no route to the destination, right? So why is my own computer sending that to ping or tracert when I have the route 192.168.42.0 with the mask 255.255.255.0? An IP address of 192.168.42.2 surely fits that route. If I explicitly add a route for the IP address I am trying to access, it works, e.g.:

        route add 192.168.42.2 mask 255.255.255.255 192.168.42.1

    (the 192.168.42.1 right after the mask is the gateway, i.e. the device to send the packet to so it can route it further). But why won't it work with the implicit route that's automatically in the table? I disabled my firewall, too (I use Comodo, if anyone thinks this could still be the problem). I've even tried explicitly adding the gateway 192.168.42.1 to the 192.168.42.0/24 route instead of routing through the 0.0.0.0 gateway, which is what On-link does, but that didn't work either, so it's not a gateway-specification problem. If the host were really unreachable, it would be the router's IP address (192.168.42.1) sending that to me... This network is entirely my own creation, so there's no problem such as an administrator locking me out, because I am the administrator.
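
    One detail worth pinning down (a diagnostic sketch; it changes nothing): "Destination host unreachable" coming from your own address on an on-link route usually means ARP resolution failed, i.e. the packet never left because no MAC address came back for the target. That narrows it from a routing problem to a layer-2 one:

        rem ping once, then see whether an ARP entry was learned for the target
        ping -n 1 192.168.42.2
        arp -a
        rem a missing or "invalid" 192.168.42.2 entry means no ARP reply arrived

    The explicit host route works precisely because it hands the packet to 192.168.42.1, whose MAC the machine already knows, and the router forwards it from there.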

  • windows 7 virtual wireless adapter keeps going to sleep

    - by conners
    Just a quick question that I can't see mentioned anywhere online. I have a Windows 7 box configured like these guys recommend (http://www.itgeekdiary.com/windows-7-as-an-wi-fi-access-point/), simply so that I can use the Windows 7 box as a wifi access point or wifi emitter. It's also called a Microsoft Virtual WiFi Miniport Adapter. But it powers off, shuts down automatically and stops working. Basically everything works as intended, and then it stops working when I am not at the Windows 7 PC for a long time. The problem seems to be that every time my PC goes to power save / sleep, the machine wakes in the morning but, blooming heck, the wifi has stopped, and you have to power-cycle the PC (which is very uncool). When I power-cycle, I have to run the following as administrator:

        C:\Windows\System32\netsh.exe wlan start hostednetwork

    I then tried a gazillion things involving services and power management, and eventually discovered that if I run the following commands as administrator it will be OK (for a bit):

        C:\Windows\System32\netsh.exe wlan stop hostednetwork
        C:\Windows\System32\netsh.exe wlan start hostednetwork

    But every 3rd or 4th time I try this "trick", it simply fails. Why does this only work some of the time? What else I tried by myself: in the properties of every adapter that relates to the wifi, I right-clicked [Configure] > [Power Management] and disabled "Allow the computer to turn off this device to save power"; this made no difference. Also (and this is a bit annoying) there is no system-tray app/GUI for the Microsoft Virtual WiFi Miniport Adapter output signal... none... so (lame as it sounds) the ONLY way I can check whether it's on is to physically go to another device and scan. So my question can probably be solved by any of the following:

    a) Can I stop Windows 7 putting this wifi to sleep when the machine sleeps?
    b) Can I force Windows to wake this process on wake? If so, how?
    c) What is the service/process really called, and how do I restart it if it crashes?
    d) How can I reset the wifi properly rather than power-cycling the host machine?
    e) Does anyone have a link to a program or app that can sit in the system tray and show the Windows 7 wifi hotspot's emission status (on/off etc.)?

    Since I am a programmer, I can easily write a VBS script or Windows exe to fix this (and I will share the solution), and address the GUI problem too, if I can work out the actual service that is running and what netsh stops/starts.
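
    Two pieces that may help with scripting this (hedged: the batch file is a sketch, and the Task Scheduler trigger is the standard resume event, worth verifying in your own event log). First, the hosted network's state is queryable, which covers question (e) without walking to another device:

        netsh wlan show hostednetwork

    Second, a restart script can be fired automatically on every wake by a scheduled task triggered "On an event" with Log: System, Source: Power-Troubleshooter, Event ID: 1 (set to run as administrator with highest privileges):

        rem restart-hostednetwork.bat
        netsh wlan stop hostednetwork
        netsh wlan start hostednetwork
        netsh wlan show hostednetwork

    That automates the manual trick for the cases where it works; the runs where it fails even by hand are likely the miniport driver itself needing a reset.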

  • Cannot get Kerberos configured for Reporting Services

    - by Ucodia
    Context: I am trying to configure Kerberos in the domain for double-hop authentication. Here are the machines and their respective roles:

    - client01: Windows 7, the client
    - dc01: Windows Server 2008 R2, domain controller and DNS
    - server01: Windows Server 2008 R2, reporting server (native mode)
    - server02: Windows Server 2008 R2, SQL Server database engine

    I want client01 to connect to server01 and configure a data source that is located on server02, using integrated security. As NTLM cannot push credentials that far, I need to set up Kerberos to enable double-hop authentication. The reporting service is run by the Network Service account and is configured with only the RSWindowsNegotiate option for authentication.

    Issue: I cannot pass my client01 credentials through to server02 when configuring the data source on server01, and therefore I get the error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    So I went to dc01 and delegated full trust for any service to server01, but that did not fix the problem. I should note that I did not configure any SPNs for server01, because Reporting Services runs as Network Service, and from what I have read on the internet, SPNs are registered automatically when Reporting Services runs as Network Service. My problem is that even if I wanted to configure the SPNs manually, I do not know where they have to be set up: on dc01 or on server01?

    I went a bit further and tried to trace the problem. From my understanding of Kerberos, this is what should happen on the network when I try to connect the data source:

        client01 ---- AS_REQ  ---> dc01
                 <--- AS_REP  ----
        client01 ---- TGS_REQ ---> dc01
                 <--- TGS_REP ----
        client01 ---- AP_REQ  ---> server01
                 <--- AP_REP  ----
        server01 ---- TGS_REQ ---> dc01
                 <--- TGS_REP ----
        server01 ---- AP_REQ  ---> server02
                 <--- AP_REP  ----

    So I captured my local network with Wireshark, but whenever I try to configure my data source from client01 on server01, to pass my credentials to server02, the client never sends an AS_REQ or TGS_REQ to the KDC on dc01.

    Questions: Can anyone tell me whether I should configure the SPNs, and on which machine they have to be configured? Also, why does client01 never request a TGT or TGS from my KDC? Do you think there is something going wrong with the DC role of dc01?
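
    On the SPN question, a hedged sketch (the FQDNs and domain are placeholders for the real names; an SPN attaches to an account in Active Directory, so the commands can be run from any box with the AD tools, typically dc01): because the report server runs as Network Service, its HTTP SPNs belong on server01's computer account, and the SQL side needs an MSSQLSvc SPN on whatever account runs the database engine.

        rem SPNs for the report server (Network Service => machine account)
        setspn -S HTTP/server01 MYDOMAIN\server01$
        setspn -S HTTP/server01.mydomain.local MYDOMAIN\server01$
        rem SPN for SQL Server; use server02$ if it runs as a machine account,
        rem or the SQL service's domain account if it has one
        setspn -S MSSQLSvc/server02.mydomain.local:1433 MYDOMAIN\server02$
        rem verify what is registered
        setspn -L MYDOMAIN\server01$

    With the SPNs in place, the "client never sends AS_REQ/TGS_REQ" symptom is worth rechecking: Windows silently falls back to NTLM when it cannot find an SPN for the target, which matches the ANONYMOUS LOGON error at the second hop.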

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player which works in standard mass storage controller mode (i.e. no device-specific drivers are needed or available). When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in the Device Manager, but it never appears as a drive. The player itself displays "USB Connected" for a split second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on.

    I am connecting the player through an Intel X79 chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even one running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 ports and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far. Among the things I have tried so far:

    - Disabling USB legacy mode in the BIOS
    - Disabling energy-saving power-down for all USB controllers in Windows' Device Manager
    - Removing and reinstalling Windows' USB mass storage driver
    - Removing and reinstalling the Intel and Fresco Logic USB controller drivers
    - Restoring the player to factory defaults

    None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I haven't installed any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try.

    Edit: Here is another new hint: I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should be, and does not automatically disconnect. If I remove and reconnect it, though, the infinite connect/disconnect loop starts anew.
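    One further diagnostic worth sketching (an addition, not something tried in the question): since the drive enumerates fine before boot but loops afterwards, it is worth checking whether a third-party filter driver is attached to the USB or disk-drive device class, a known cause of mass-storage devices being dropped right after enumeration. The GUIDs below are the standard USB and disk-drive class GUIDs; anything unfamiliar in the output is a suspect (partmgr is the expected, normal upper filter on the disk class):

        rem USB controller class
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{36FC9E60-C465-11CF-8056-444553540000}" /v UpperFilters
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{36FC9E60-C465-11CF-8056-444553540000}" /v LowerFilters
        rem Disk drive class
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E967-E325-11CE-BFC1-08002BE10318}" /v UpperFilters
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E967-E325-11CE-BFC1-08002BE10318}" /v LowerFilters

    reg query reporting "unable to find the specified registry value" simply means no filter is registered there, which rules this cause out.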

    Read the article

  • March 21st Links: ASP.NET, ASP.NET MVC, AJAX, Visual Studio, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET

    - URL Routing in ASP.NET 4: Scott Mitchell has a nice article that talks about the new URL routing features coming to Web Forms applications with ASP.NET 4. Also check out my previous blog post on this topic.
    - Control of Web Control ClientID Values in ASP.NET 4: Scott Mitchell has a nice article that describes how it is now easy to control the client "id" value emitted by server controls with ASP.NET 4.
    - Web Deployment Made Awesome: Very nice MIX10 talk by Scott Hanselman on the new web deployment features coming with VS 2010, MSDeploy, and .NET 4. Makes deploying web applications much, much easier.
    - ASP.NET 4's Browser Capabilities Support: Nice blog post by Stephen Walther that talks about the new browser definition capabilities support coming with ASP.NET 4.
    - Integrating Twitter into an ASP.NET Website: Nice article by Scott Mitchell that demonstrates how to call and integrate Twitter from within your ASP.NET applications.
    - Improving CSS with .LESS: Nice article by Scott Mitchell that describes how to optimize CSS using .LESS - a free, open source library.

    ASP.NET MVC

    - Upgrading ASP.NET MVC 1 applications to ASP.NET MVC 2: Eilon Lipton from the ASP.NET team has a nice post that describes how to easily upgrade your ASP.NET MVC 1 applications to ASP.NET MVC 2. He has an automated tool that makes this easy. Note that automated MVC upgrade support is also built into VS 2010; use the tool in this blog post for updating existing MVC projects with VS 2008.
    - Advanced ASP.NET MVC 2: Nice video talk by Brad Wilson of the ASP.NET MVC team. In it he describes some of the more advanced features in ASP.NET MVC 2 and how to maximize your productivity with them.
    - Dynamic Select Lists with ASP.NET MVC and jQuery: Michael Ceranski has a nice blog post that describes how to dynamically populate dropdown lists on the client using AJAX.

    AJAX

    - Microsoft AJAX Minifier: We recently shipped an updated minifier utility that allows you to shrink/minify both JavaScript and CSS files - which can improve the performance of your web applications. You can run it manually as a command-line tool or integrate it automatically using a Visual Studio build task. You can download it for free here.

    Visual Studio

    - VS 2010 Tip: Quickly Closing Documents: Nice blog post that describes some techniques for optimizing how windows are closed in the new VS 2010 IDE.
    - Collapse to Definitions with Outlining: Nice tip from Zain on how to collapse the code editor to outline mode using Ctrl + M, Ctrl + O. Also check out his post on copy/paste with outlining here.
    - $299 VS 2010 Upgrade Offer for VS 2005/2008 Standard Users: Soma blogs about a nice VS 2010 upgrade offer you can take advantage of if you have VS 2005 or VS 2008 Standard edition. For $299 you can upgrade to VS 2010 Professional edition.
    - Dependency Graphs: Jason Zander (who runs the VS team) has a nice blog post that covers the new dependency graph support within VS 2010. This makes it easier to visualize the dependencies within your application. Also check out this video here.
    - Layer Validation: Jason Zander has a nice blog post that talks about the new layer validation features in VS 2010. This enables you to enforce cleaner layering within your projects and solutions.
    - VS 2010 Profiler Blog: The VS 2010 Profiler team has their own blog, and on it you can find a bunch of nice posts from the last few months that talk about many of the new features coming with VS 2010's profiler support. Some really nice features are coming.

    Silverlight

    - Silverlight 4 Training Course: Nice free set of training courses from Microsoft that can help bring you up to speed on all of the new Silverlight 4 features and how to build applications with them. Updated and current with the recently released Silverlight 4 RC build and tools.
    - Getting Started with Silverlight and Windows Phone 7 Development: Nice blog post by Tim Heuer that summarizes how to get started building Windows Phone 7 applications using Silverlight. Also check out my blog post from last week on how to build a Windows Phone 7 Twitter application using Silverlight.
    - A Guide to What Has Changed with the Silverlight 4 RC: Nice summary post by Tim Heuer that describes all of the things that changed between the Silverlight 4 Beta and the Silverlight 4 RC.
    - Path Based Layout - Part 1 and Part 2: Christian Schormann has a nice blog post about a really cool new feature in Expression Blend 4 and Silverlight 4 called Path Layout. Also check out Andy Beaulieu's blog post on this.

    Hope this helps,

    Scott

    Read the article

< Previous Page | 371 372 373 374 375 376 377 378 379 380 381 382  | Next Page >