Search Results



  • Some Memory Slots Not Working on MSI FM2-A85XA-G65 Motherboard

    - by Mike Ciaraldi
    Short version of question: Does anyone have an MSI FM2-A85XA-G65 motherboard who can confirm that all four memory slots work?

    Long version: Several months ago I bought an MSI FM2-A85XA-G65 motherboard at Newegg. At that point I installed an AMD A8-5500 processor and two sticks of Corsair Vengeance 8 GB DDR3-1866 memory, and put it into my file server. I installed the RAM in slots 1 and 3, as directed in the manual, to enable dual-channel memory access. It seemed to work fine, so I bought a second identical mobo (which arrived dead, but was quickly replaced by Newegg) and set of RAM, installed an A10-5800K, and put that into my production Linux machine. Again, it seemed to work well.

    Eventually I happened to notice that on the server only 8 GB of RAM appeared in the BIOS. I tried each of the slots and memory modules individually and in various combinations. I even swapped processors with the production machine. The result was that putting memory in slots 1 and 2 worked (showing a total of 16 GB), but any memory in slots 3 or 4 was not recognized. However, all four memory slots in the production machine worked, and I confirmed this with both processors.

    I contacted MSI and arranged to ship the defective mobo back to them for replacement under warranty. I did not want my file server to be down in the interim, and I had another machine I wanted to upgrade, so I bought a third identical mobo to use. That one had the same problem -- only memory slots 1 and 2 worked. I tested it thoroughly with multiple processors and memory sticks. I sent the defective mobo back to MSI and they sent me a new one. This had the same memory slot problem, so I sent it back. The replacement arrived the other day and shows the same problem. I contacted MSI yet again and they said that nobody else has reported memory slot problems on this board and it must be my processor.

    So my score so far, out of six boards of this model:

    - One where all four slots work.
    - One which was dead on arrival.
    - Four where only memory slots 1 and 2 work.

    Before I tear my other machines apart and start swapping processors again, I thought I would ask if anyone else has this exact model motherboard and could confirm that all four memory slots either do or do not work. According to MSI you should be able to just plug a single memory module into any of the slots and it will work (and it does on the one mobo I have which works correctly). If you have not yet used all four slots, this is a good time to test them so you know if you can expand your memory in the future. Thanks in advance to anyone who can help.
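    For anyone comparing on Linux, this is roughly how I have been checking which slots the firmware actually reports, without rebooting into the BIOS each time (a sketch; output format varies by board):

        # List every DIMM slot the BIOS reports, populated or not
        sudo dmidecode -t memory | grep -E 'Locator|Size'

        # Cross-check the total the kernel actually sees
        free -g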


  • arp "who-has tell" on cloned machine

    - by mcmorry
    I have an urgent problem to solve today, but I'm lost. Please help. I've cloned a virtual machine hosted on VMware ESXi 4.1. The OS is now Ubuntu Server 12.04 LTS, but at the time of cloning it was 10.04 LTS. I fixed the MAC address manually inside /etc/udev/rules.d/70-persistent-net.rules -- a known issue on Ubuntu; I had to remove the old MAC address and set the new one as eth0. Everything seems to work fine, except ARP. My provider OVH sent me a warning to resolve it today (this is the second day) or they will block my IP! The log contains many lines like this:

        Tue Jun 5 01:04:29 2012 : arp who-has 178.32.136.212 tell 178.32.136.224

    where .224 is the cloned server that is causing problems, and .212 is the original one. arp -na returns:

        ? (178.33.230.254) at 00:07:b4:00:00:02 [ether] on eth0
        ? (178.32.136.212) at 00:50:56:09:8e:f1 [ether] on eth0

    The first IP is the ESXi machine. The second one should not be there. I'm not an expert and I don't know what else to do to fix this problem. Any help will be very appreciated. Thanks.

    EDIT: ifconfig on .224:

        eth0  Link encap:Ethernet  HWaddr 00:50:56:01:32:c6
              inet addr:178.32.136.224  Bcast:178.32.136.255  Mask:255.255.255.0
              inet6 addr: fe80::250:56ff:fe01:32c6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:399924 errors:0 dropped:465 overruns:0 frame:0
              TX packets:241884 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:58006071 (58.0 MB)  TX bytes:663603166 (663.6 MB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:516216 errors:0 dropped:0 overruns:0 frame:0
              TX packets:516216 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:236284275 (236.2 MB)  TX bytes:236284275 (236.2 MB)

    ifconfig on .212:

        eth0  Link encap:Ethernet  HWaddr 00:50:56:09:8e:f1
              inet addr:178.32.136.212  Bcast:178.32.136.255  Mask:255.255.255.0
              inet6 addr: fe80::250:56ff:fe09:8ef1/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:16014 errors:0 dropped:0 overruns:0 frame:0
              TX packets:14511 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:15134444 (15.1 MB)  TX bytes:2683025 (2.6 MB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:9944 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9944 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1139347 (1.1 MB)  TX bytes:1139347 (1.1 MB)
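    In case it helps others, these are the checks I have been running from the .224 guest to see whether anything else still claims the .212 address, and to clear the stale neighbour entry (standard iputils/net-tools commands; the interface name eth0 is assumed):

        # Duplicate-address probe: does anything else answer for .212?
        sudo arping -D -c 3 -I eth0 178.32.136.212

        # Drop the stale ARP entry so it has to be re-learned
        sudo arp -d 178.32.136.212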


  • New hard drives failing within weeks

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me.

    The system was running Windows XP on an Asus P5W-DH Deluxe, with a RAID-1 array. I started out with 2 x 500 GB 7200 RPM Western Digital drives. One died. I took it out to RMA it. On the same day, the router was fried. I assumed a power surge had occurred and connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it. The RAID array was rebuilt. A few days later, the other drive died; I assumed the rebuild caused it to fail, and took it out for RMA. Before the replacement arrived, the remaining drive died. I then discovered I could re-enable the drives using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again.

    I got two new 1.5 TB 7200 RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and power supply. Both drives died again. The voltage on the plug is stable between 120 and 122 V as per the UPS. None of the other devices have had any problems (monitors, etc.).

    At this point, I see two options: (a) an electrical issue in the house that was, for some reason, not blocked by the UPS; (b) something else inside the system causing surges -- the motherboard? the onboard RAID controller? Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I've just gotten a new computer (Core i7) to replace it. If it is stable, I can conclude that (b) was the problem. If it fries its hard drive again, I can conclude that it is an electrical issue in the house. Do you have any other thoughts? Any tools I can run on the drives that failed to get more information about the original SMART event history?
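    For the last question, my understanding is that smartmontools can pull the on-disk SMART history from the failed drives when they are attached to a Linux box (a sketch; /dev/sdX is a placeholder, and it assumes the drives still enumerate at all):

        # Full SMART report, including attributes and logs
        sudo smartctl -a /dev/sdX

        # Just the error log and self-test history
        sudo smartctl -l error /dev/sdX
        sudo smartctl -l selftest /dev/sdX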


  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4 GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to, and we'd like to keep it up as much as possible.

    What we require:

    - It needs to always be functional from 8am to 4pm for our data entry person and for others to do the other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems:

    - The server is old and it may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it, and it's not for long-term outages.

    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what are our options? These are the things we thought of, sort of (backup sketch follows the list):

    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model. Is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution that backs up to another machine or gets us up and running again quickly. Does anybody know of one for Arch Linux/Linux?

    Also, if we are to purchase hardware, what is decent? We both know a ton about computers, but we're kind of unsure what step to take with this, especially with this type of server. Thanks
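    For the backup part, the simplest thing we've sketched so far is a nightly rsync of the site data to a second box over SSH (paths and hostname are just examples):

        # /etc/cron.d/nightly-backup -- sync the web root to a spare machine at 2am
        0 2 * * * root rsync -az --delete /srv/http/ backup@spare-box:/backups/www/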


  • Synchronizing an XML file with a MySQL database

    - by Fred K
    My company uses internal management software for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website. Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML). The synchronization does not have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night). Also, each product in their software has one or more images, so I also have to make the images available on the website. Here is an example of an XML export:

        <?xml version="1.0" encoding="UTF-8"?>
        <export_management userid="78643">
          <product id="1234">
            <version>100</version>
            <insert_date>2013-12-12 00:00:00</insert_date>
            <warrenty>true</warrenty>
            <price>139,00</price>
            <model>
              <code>324234345</code>
              <model>Notredame</model>
              <color>red</color>
              <size>XL</size>
            </model>
            <internal>
              <color>green</color>
              <size>S</size>
            </internal>
            <options>
              <s_option>some option</s_option>
              <s_option>some option</s_option>
              <extra_option>some option</extra_option>
              <extra_option>some option</extra_option>
            </options>
            <images>
              <image>
                <small>1234_0.jpg</small>
              </image>
              <image>
                <small>1234_1.jpg</small>
              </image>
            </images>
          </product>
        </export_management>

    Any ideas for how I can do this? Better approaches are also welcome.
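    One possibility I'm considering for the flat fields: MySQL 5.5+ can ingest XML directly with LOAD XML, mapping child elements and attributes to same-named columns. A sketch (the table and database names are assumptions, and the nested model/options/images elements would still need a separate pass):

        # Nightly cron step, assuming the export lands in /exports/products.xml
        mysql -u sync -p shop -e "
          LOAD XML LOCAL INFILE '/exports/products.xml'
          INTO TABLE products
          ROWS IDENTIFIED BY '<product>';"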


  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode, so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards, and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS -- because it should be working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the intervening change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special Windows Server settings for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this?

    Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working in the evening with the database (usually there are 30-50 users working simultaneously during working hours), so the server was under almost zero load. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does UPDATE/INSERT queries, but less frequently -- not during regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption


  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 instance with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory' option is currently set to 6 GB.

    I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2,504,188 pages, at 8 KB per page) is 20,033,504 KB, which is roughly 20,000 MB, about 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because the cache fattens, and timeouts occur once SQL Server can no longer shuffle pages in the buffer pool as freely. Costs go up. Am I correct in my understanding?

    I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously cutting down the I/O would make SQL Server use less RAM -- or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe everything will run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL Server once a week when RAM usage is high, and suddenly the timeouts disappear. SQL Server breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indexes here and there, is there something else I can do? Any advice beyond what I already know (not much yet..) would be much appreciated. Leo.
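    For reference, this is the plan-cache query I plan to use to rank the worst offenders before touching any code (standard DMVs available on 2008; run from SSMS or sqlcmd):

        -- Top statements by cumulative logical reads (8 KB pages)
        SELECT TOP 10
            qs.total_logical_reads,
            qs.execution_count,
            SUBSTRING(st.text, 1, 200) AS statement_start
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_logical_reads DESC;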


  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it had been working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted old drivers in case my computer was a little confused. It never made a difference, and my mouse seemed to be working just fine despite the permanent error in my Device Manager. I looked it up several times online, but I never found anything I could actually use; when I go to official websites, I always get the same responses -- "plug so-and-so into a different place", "update drivers", "install Silverlight before you can watch this tutorial", "try it on a different machine" -- so I gave up on that.

    But now I have a real problem: lately, my little strange error has evolved into a full-blown Code 24, and my mouse is starting to turn on and off randomly, especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error code 24, I really didn't find much other than its meaning:

        Code 24: This device is not present, is not working properly, or does not have all its drivers installed. (Code 24)
        Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears.

    But I have tried uninstalling the device entirely several times, and it goes right back to its previous state, with error 24 and the random power-cycling. What do I do? I cannot afford to take it to a repair place, and I can't really afford a new mouse either; I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember having gotten some early problems after I converted my Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here.

    Edit: it is a USB mouse we're talking about here -- an MX™518 Optical Gaming Mouse (Logitech).

    Edit 2: I am seeing no rupture on the outside, so any cable damage must be inside the mouse or inside the rubber protecting the cable, which would be really inconvenient to search for.


  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical host machine. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5 s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time.

    The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen, and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently?

    One option I can think of is using flat files, saving each packet of data as a file on the file system. This way, if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues, however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5 s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (though not impossible) to manage.

    Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
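    One candidate I'm leaning towards is SQLite with write-ahead logging, which is designed to keep committed data intact across power loss. A sketch of the settings I would test first (sqlite3 command-line shell; the path and schema are just examples):

        sqlite3 /var/lib/buffer.db <<'EOF'
        PRAGMA journal_mode=WAL;   -- atomic commits that survive power cuts
        PRAGMA synchronous=FULL;   -- fsync on every commit
        CREATE TABLE IF NOT EXISTS buffer (
            id INTEGER PRIMARY KEY,
            received_at TEXT NOT NULL,
            payload BLOB NOT NULL
        );
        EOF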


  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. There exist no links at all to this single PHP page, and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself).

    The strange thing is this: this mysterious person/bot does not visit any other page on my site -- not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The user-agent data is always some variation of:

        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b)

    ... but sometimes MSIE 6.0 instead of 7, and various other plug-ins. The browser string is different every time, as are the lowest-order bits of the address. And it's just that: one hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor.

    Doing a whois on that IP address shows it's from the New York area, from the "Websense" ISP. The lowest-order 8 bits of the address are always different, but it is always from 208.80.194.0/24. From most of the computers that I access my website from, a traceroute to my server does not contain a router anywhere along the way with an IP in 208.80.*, so I might think that rules out any kind of HTTP sniffing. I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!


  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem!

    The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header".

    When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html)

    The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache -- only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously.

    Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
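    For what it's worth, the next thing I plan to try is running htcacheclean as a daemon, so deletions happen continuously in small batches instead of one giant scan (flags per its man page; the 30-minute interval is a guess):

        # Check every 30 minutes, be nice to other I/O, clean empty dirs, cap at 15 GB
        htcacheclean -d30 -n -t -p/htcache -l15G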


  • Mac OSX 10.8 Server DNS Domain Routing

    - by Oldek
    I just can't seem to figure out the logic of how to configure my Mac server. I have set up DNS, which will take the domain and all subdomains and point them towards an IP. File: db.mydomain.com (in /var/named/):

        mydomain.com.      10800 IN SOA mydomain.com. admin.mydomain.com. (
                               2012110903 ; serial
                               3600       ; refresh (1 hour)
                               900        ; retry (15 minutes)
                               1209600    ; expire (2 weeks)
                               86400 )    ; minimum (1 day)
                           10800 IN NS mydomain.com.
                           10800 IN A  10.0.1.2
        www.mydomain.com.  10800 IN A  10.0.1.2

    So I want all of these requests to go to the 10.0.1.2 server, as I run 2 servers in my cluster. This one has always handled the requests, and now I want to add a server in between. The server in between will get all the traffic from my router, which NATs the traffic coming from outside. After setting this up and trying to point my port 80 towards the new server that will be the middle point, it doesn't work. Is it even possible to do it this way? First server: Mac. Second server: Linux.

    So what I am trying to achieve, once more:

    1. A user goes to mydomain.com or www.mydomain.com
    2. The user's request gets handled by my first server
    3. The first server refers to a local server, which is only available locally (it is configured to allow requests on port 80 and handle them)
    4. The second server receives the request
    5. The second server returns a response (either sent directly to the user or sent back through the first server, whichever is more secure and configurable)

    I also want to be able to set up domains that lead to other servers in the future, and some that are only available within the VPN (if that changes anything). I hope some kind soul can help me with this; it is really cumbersome for my mind to get the logic here. Do I have to configure my other server in any way? /Marcus
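    The closest I've come to the "pass it through the first server" idea is a reverse proxy on the Mac, since OS X Server ships with Apache (a sketch; it assumes mod_proxy and mod_proxy_http are loaded, and the vhost file location may differ on 10.8):

        # Forward all requests for mydomain.com to the internal Linux box
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            ProxyPass        / http://10.0.1.2/
            ProxyPassReverse / http://10.0.1.2/
        </VirtualHost>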


  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway. My 2 GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). For comparison, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis's memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200 r/s of static files comparable to the nginx performance others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
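    For (b) and (c), these are the knobs I've collected so far and plan to test one at a time (values are starting points, not gospel; the long-test degradation smells like TIME_WAIT / ephemeral-port exhaustion to me, hence tcp_tw_reuse):

        # /etc/sysctl.conf -- connection-rate related kernel settings
        net.core.somaxconn = 4096              # larger listen backlog
        net.ipv4.ip_local_port_range = 10240 65535
        net.ipv4.tcp_tw_reuse = 1              # reuse TIME_WAIT sockets
        fs.file-max = 200000

        # nginx.conf equivalents
        # worker_rlimit_nofile 100000;
        # events { worker_connections 10240; }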


  • BlueScreens on my ThinkPad with Windows 7 64 Bit and an SSD (CRITICAL_OBJECT_TERMINATION, ntoskrnl.exe)

    - by pvorb
    I'm getting BlueScreens about every five days, and have been for more than three months. Here's an example:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        The problem seems to be caused by the following file: ntoskrnl.exe

        CRITICAL_OBJECT_TERMINATION

        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:
        Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any Windows updates you might need.
        If problems continue, disable or remove any newly installed hardware or software. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select Advanced Startup Options, and then select Safe Mode.

        Technical Information:
        *** STOP: 0x000000f4 (0x0000000000000003, 0xfffffa80065f2b30, 0xfffffa80065f2e10, 0xfffff80002f9bf40)
        *** ntoskrnl.exe - Address 0xfffff80002c98d00 base at 0xfffff80002c19000 DateStamp 0x4d9fdd5b

    It has always been the same BlueScreen message, showing CRITICAL_OBJECT_TERMINATION, 0x000000f4, and ntoskrnl.exe. Of course the addresses change.

    My computer is a ThinkPad T400 (about 2 years old) with an SSD in it, running Windows 7 Professional 64-bit. When I bought the computer, it had a 250 GByte Seagate HDD in it, which I replaced with a 500 GByte HDD by Western Digital. Last September I bought a Corsair F120 SSD and replaced the HDD with the SSD. Then I bought a LEICKE HDD adapter for the UltraBay II, where I plugged in my 500 GByte HDD. This configuration ran for about half a year without any errors.

    After re-installing Windows this spring, I am getting regular BlueScreens. Sometimes my system runs for about 2 weeks without a BSOD; sometimes I get several BlueScreens a day. The only thing I've noticed is that I'm always running Google Chrome when it happens. Has anyone had their own bad experiences with some of my components, or can anybody tell me if it would be helpful to send my notebook to Lenovo? Thank you very much for your help on my issue! Regards, Paul
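    In case the dumps are useful: my next step is to open the minidumps with the Debugging Tools for Windows and let the automated analysis point at the failing driver (standard WinDbg workflow; the dump path is the Windows default and the file name is a placeholder):

        rem Open the most recent crash dump
        windbg -z C:\Windows\Minidump\<latest>.dmp
        rem then, at the debugger prompt:
        rem   !analyze -v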



  • Synergy 1.5 crash (OSX 10.6.8)

    - by Oliver
    Thanks for taking the time to read this. I recently installed Synergy 1.5 r2278 (for Mac OS X 10.6.8) and was using it fine for most of the day; then it decided to stop working (the only thing I changed system-wise was the screensaver -- and after it started crashing I disabled that again, to see if it would resolve things). When I start Synergy (on the Mac -- the client), after about 5 seconds (and successfully connecting to the server) it says: "synergyc quit unexpectedly". Here is the crash log (with binary info removed -- too long for post requirements):

        Process:         synergyc [1026]
        Path:            /Applications/Synergy.app/Contents/MacOS/synergyc
        Identifier:      synergy
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  Synergy [1023]

        Date/Time:       2014-05-28 15:36:17.746 +0930
        OS Version:      Mac OS X 10.6.8 (10K549)
        Report Version:  6

        Interval Since Last Report:          2144189 sec
        Crashes Since Last Report:           23
        Per-App Interval Since Last Report:  10242 sec
        Per-App Crashes Since Last Report:   9
        Anonymous UUID:                      86D5A57C-13D4-470E-AC72-48ACDDDE5EB0

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  5

        Application Specific Information:
        abort() called

        Thread 0: Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib         0x95cf3afa mach_msg_trap + 10
        1   libSystem.B.dylib         0x95cf4267 mach_msg + 68
        2   com.apple.CoreFoundation  0x95af02df __CFRunLoopRun + 2079
        3   com.apple.CoreFoundation  0x95aef3c4 CFRunLoopRunSpecific + 452
        4   com.apple.CoreFoundation  0x95aef1f1 CFRunLoopRunInMode + 97
        5   com.apple.HIToolbox       0x93654e04 RunCurrentEventLoopInMode + 392
        6   com.apple.HIToolbox       0x93654bb9 ReceiveNextEventCommon + 354
        7   com.apple.HIToolbox       0x937dd137 ReceiveNextEvent + 83
        8   synergyc                  0x000356d0 COSXEventQueueBuffer::waitForEvent(double) + 48
        9   synergyc                  0x00010dd5 CEventQueue::getEvent(CEvent&, double) + 325
        10  synergyc                  0x00011fb0 CEventQueue::loop() + 272
        11  synergyc                  0x00044eb6 CClientApp::mainLoop() + 134
        12  synergyc                  0x0005c509 standardStartupStatic(int, char**) + 41
        13  synergyc                  0x000448a9 CClientApp::runInner(int, char**, ILogOutputter*, int (*)(int, char**)) + 137
        14  synergyc                  0x0005c4b0 CAppUtilUnix::run(int, char**) + 64
        15  synergyc                  0x000427df CApp::run(int, char**) + 63
        16  synergyc                  0x00006e65 main + 117
        17  synergyc                  0x00006dd9 start + 53

        Thread 1:
        0   libSystem.B.dylib  0x95d607da __sigwait + 10
        1   libSystem.B.dylib  0x95d607b6 sigwait$UNIX2003 + 71
        2   synergyc           0x00009583 CArchMultithreadPosix::threadSignalHandler(void*) + 67
        3   libSystem.B.dylib  0x95d21259 _pthread_start + 345
        4   libSystem.B.dylib  0x95d210de thread_start + 34

        Thread 2:
        0   libSystem.B.dylib  0x95d21aa2 __semwait_signal + 10
        1   libSystem.B.dylib  0x95d2175e _pthread_cond_wait + 1191
        2   libSystem.B.dylib  0x95d212b1 pthread_cond_timedwait$UNIX2003 + 72
        3   synergyc           0x00009476 CArchMultithreadPosix::waitCondVar(CArchCondImpl*, CArchMutexImpl*, double) + 150
        4   synergyc           0x0002b18f CCondVarBase::wait(double) const + 63
        5   synergyc           0x0002ce68 CSocketMultiplexer::serviceThread(void*) + 136
        6   synergyc           0x0002d698 TMethodJob<CSocketMultiplexer>::run() + 40
        7   synergyc           0x0002b8f4 CThread::threadFunc(void*) + 132
        8   synergyc           0x00008f30 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 80
        9   synergyc           0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        10  libSystem.B.dylib  0x95d21259 _pthread_start + 345
        11  libSystem.B.dylib  0x95d210de thread_start + 34

        Thread 3: Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib  0x95d1a382 kevent + 10
        1   libSystem.B.dylib  0x95d1aa9c _dispatch_mgr_invoke + 215
        2   libSystem.B.dylib  0x95d19f59 _dispatch_queue_invoke + 163
        3   libSystem.B.dylib  0x95d19cfe _dispatch_worker_thread2 + 240
        4   libSystem.B.dylib  0x95d19781 _pthread_wqthread + 390
        5   libSystem.B.dylib  0x95d195c6 start_wqthread + 30

        Thread 4:
        0   libSystem.B.dylib  0x95d19412 __workq_kernreturn + 10
        1   libSystem.B.dylib  0x95d199a8 _pthread_wqthread + 941
        2   libSystem.B.dylib  0x95d195c6 start_wqthread + 30

        Thread 5 Crashed:
        0   libSystem.B.dylib  0x95d610ee __semwait_signal_nocancel + 10
        1   libSystem.B.dylib  0x95d60fd2 nanosleep$NOCANCEL$UNIX2003 + 166
        2   libSystem.B.dylib  0x95ddbfb2 usleep$NOCANCEL$UNIX2003 + 61
        3   libSystem.B.dylib  0x95dfd6f0 abort + 105
        4   libSystem.B.dylib  0x95d79b1b _Unwind_Resume + 59
        5   synergyc           0x00008fd1 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 241
        6   synergyc           0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        7   libSystem.B.dylib  0x95d21259 _pthread_start + 345
        8   libSystem.B.dylib  0x95d210de thread_start + 34

        Thread 5 crashed with X86 Thread State (32-bit):
        eax: 0x0000003c  ebx: 0x95d60f39  ecx: 0xb0288a7c  edx: 0x95d610ee
        edi: 0x00521950  esi: 0xb0288ad8  ebp: 0xb0288ab8  esp: 0xb0288a7c
        ss: 0x0000001f   efl: 0x00000247  eip: 0x95d610ee  cs: 0x00000007
        ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
        cr2: 0x002fe000

        Model: MacBook2,1, BootROM MB21.00A5.B07, 2 processors, Intel Core 2 Duo, 2.16 GHz, 2 GB


  • Recovering damaged external hard disk by installing internally

    - by nfarshchi
    I had a 1 TB Western Digital (My Book series) 3.5" USB 3.0 drive. One day, the SATA-to-USB3 converter board was damaged and has not worked since. I decided to open the cover and use the HDD as an internal drive. When I attached the HDD to my PC and booted into Windows, it asked me which partition style I wanted to use, "MBR or GPT". I chose MBR, and Windows gave me a 1 TB empty hard drive. I tried to recover with Recover My Files and some other recovery programs, but with no success. Someone told me that I should have chosen GPT instead of MBR; how can I do that now? Another person told me that the SATA-to-USB3 converter board is coded to transform the data it saves on the HDD, so you cannot use the disk internally without losing data, and that I should find another (exactly identical) SATA-to-USB3 board. That is impossible, because they are not produced any more. Please help me find a solution to bring back my data.

    UPDATE: I have a 1 TB WD My Book USB 3.0. The board that converts SATA to USB3 was damaged, so when the HDD was in the enclosure the computer did not recognize it. I opened the enclosure and removed the HDD to use it internally. After connecting it to my PC, Windows showed me a message with two choices, MBR or GPT. I chose MBR, and Windows gave me a 1 TB empty new volume. I tried many recovery programs to recover my data, but with no success. I brought it to an expert recovery company and they told me the converter board (SATA to USB3) applies some encryption to the data, and without that board you cannot recover anything. So I bought another empty WD enclosure and put the HDD inside, but even then there were no files. I tried to recover again in this state, but again no success. So I have some unanswered questions: Do these converter boards apply any password or encryption? If yes, how can I get around it? Has running many recovery programs affected my data? Any suggestion or solution for bringing back my data? The recovery programs I have used include: Recover My Files, EaseUS Data Recovery, Easy Recovery, TestDisk, and Ontrack EasyRecovery. Note: when I was using TestDisk, it asked me to choose which partition table type to use; I chose NTFS, as it was. Did this make any change to the data?
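    One thing I've since been advised to do before any more experiments: image the whole disk first, so recovery attempts can't make things worse (a sketch, assuming the disk shows up as /dev/sdb on a Linux machine -- double-check with lsblk first):

        # Clone the raw disk to an image, with a log so it can resume after interruption
        sudo ddrescue /dev/sdb /mnt/bigdisk/mybook.img /mnt/bigdisk/mybook.log

        # Run recovery tools against the image, never against the original disk
        sudo testdisk /mnt/bigdisk/mybook.img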


  • Asterisk: Dropping calls with an "ast_yyerror"

    - by Nick
    I'm having an intermittent issue where Asterisk will play our greeting to the caller and then drop the call instead of making our phones ring. I'm unable to reproduce the problem with any phones I have here, and many callers get through just fine. Some callers, though, run into the problem, and I can't find any pattern to it. The bit of information I could find said it is caused by an error in evaluating a dialplan expression. I'm thinking it's this line:

        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)

    But I'm not sure what's wrong with it. I see the following error on the console:

        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^

    Surrounding console output:

        -- Executing [START@AGInbound:1] Answer("IAX2/AtlantaTeliax-10086", "") in new stack
        -- Executing [START@AGInbound:2] BackGround("IAX2/AtlantaTeliax-10086", "0000_AG_THANK_YOU_FOR_CALLING_AG") in new stack
        -- Playing '0000_AG_THANK_YOU_FOR_CALLING_AG.slin' (language 'en')
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:463 ast_yyerror: If you have questions, please refer to doc/tex/channelvariables.tex in the asterisk source.
        -- Executing [START@AGInbound:3] GotoIf("IAX2/AtlantaTeliax-10086", "?CLOSED,1") in new stack
        -- Executing [START@AGInbound:4] GotoIfTime("IAX2/AtlantaTeliax-10086", "9:30-17:0|mon-fri|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:5] GotoIfTime("IAX2/AtlantaTeliax-10086", "10:0-18:30|sat|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:6] GotoIfTime("IAX2/AtlantaTeliax-10086", "12:0-17:0|sun|*|*?OPEN,1") in new stack

    Relevant lines from the dialplan:

        exten = START,1,Answer()
        exten = START,n,Background(0000_AG_THANK_YOU_FOR_CALLING_AG)
        ; See if we're open
        ; Force Closed if no one's going to be answering
        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)
        exten = START,n,GotoIfTime(${AG_WEEKDAY_OPEN_HOUR}:${AG_WEEKDAY_OPEN_MIN}-${AG$
        exten = START,n,GotoIfTime(${AG_SATURDAY_OPEN_HOUR}:${AG_SATURDAY_OPEN_MIN}-${$
        exten = START,n,GotoIfTime(${AG_SUNDAY_OPEN_HOUR}:${AG_SUNDAY_OPEN_MIN}-${AG_S$
        ; ...and we're not. But maybe the time of day has been overridden?
        exten = START,n,GotoIf($[${OVERRIDE_TIME_OF_DAY}=TRUE]?OPEN,1)
        ; No override... We're definitely closed.
        exten = START,n,Goto(CLOSED,1)

    Any idea what's wrong with the expression? We recently upgraded from 1.4 to 1.6.
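    From what I've read since posting, the usual fix for this class of error is to quote the variable so that an empty ${FORCE_CLOSED} still yields a valid expression ("" = "TRUE" instead of the bare =TRUE the parser is choking on). Untested here yet:

        exten = START,n,GotoIf($["${FORCE_CLOSED}" = "TRUE"]?CLOSED,1)
        exten = START,n,GotoIf($["${OVERRIDE_TIME_OF_DAY}" = "TRUE"]?OPEN,1)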


  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have the second server, I wish to do a little more than just have one server on standby, replicating all day. As long as it's there, I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address. If one server crashes, the other server takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server. I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I've read it's discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their file systems and run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the tasks of the failed one, simply by IP switching.

    I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know!

    PS: virtualisation, hosting in different locations, or active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover in another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.


  • Windows Update and IE fail to connect, but Chrome fine?

    - by I Gottlieb
    Out of ideas on this one. (Running Windows Vista.) I have a program that accesses the internet to retrieve financial market data. One day it tells me that it can't log in -- timeout error. I check the documentation and it says the machine must have a working copy of the IE browser installed. I check IE (I have IE9) and sure enough -- it just spins. No error message, no timeout, no 'try later' -- just spins, as far as I can tell, indefinitely. Any page, any address. Even access to a localhost site just spins. Chrome works fine. So does another program I have that fetches market data. Windows 'diagnose and repair' says my internet connection is working fine. I tried an uninstall/re-install of IE. Same spinning.

    I then tried to install Windows Updates, and guess what? I can't. It comes up with error 80072efd; I checked the documentation for the error, and it says I should check for firewall blockage. The thing is, the only firewall I have is Windows Firewall, and obviously it wouldn't be blocking Windows Update. In contrast, Windows 'Help' in all programs has no problem accessing the internet. I had a filter on the internet connection, and this was updated just prior to the first appearance of the problem. But I uninstalled the filter entirely (officially, with a password from the company's service rep) -- and no difference. I'm guessing that a high-level Windows network service file is corrupted -- one used only by MS programs and their ilk -- but how do I find it? I'd like to avoid having to do a clean install of Windows. Much obliged for any insight. IG

    Ramhound -- thanks for the reply. I'm familiar with virtual machines as in e.g. the JVM, an emulator for an alternative architecture, or (theoretical) Turing machine equivalence, but I'm not familiar with the way you're using the term. Please clarify -- what does one need for this VM 'test', and why do you expect it will provide insight into the problem? And what sort of 'configuration issue' are you referring to? IG
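    For anyone with the same symptoms, these are the generic resets I've been pointed at (elevated command prompt, reboot afterwards); they target the WinHTTP/Winsock layers that IE and Windows Update use but Chrome largely bypasses, which would fit the pattern here:

        netsh winhttp reset proxy
        netsh winsock reset
        netsh int ip reset resetlog.txt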


  • Migrating Split Access Database from one domain to another (not working, details in Q)

    - by Expo_Rob
    Some background: I'm a programmer, not a network administrator, who has been asked to migrate some accounting software (Integrated Office Accounting version 3.2) from an existing domain (OLD_NETWORK) to a new domain (NEW_NETWORK). Nobody at the office knows how it works under the hood. It is a split Access 2000 database with the back-end shared on a file server (which is also the DC) using mapped drives. The DC is NT Server 4 SP6. The new server is Server 2003. The two networks are running independently (ie: two computers on each desk). I have been able to get new computers set up on NEW_NETWORK and working with the IOA software just perfectly, but for one problem:

    The company here uses other, entirely separate databases which access the tables IOA maintains (specifically the 'customers' table) via links. To switch between these systems, you press F11, then File-Open the appropriate database and away you go (this is necessary to maintain the permissions that the IOA system uses to protect the customers table). The entire database is Access 2000, the links go to other Access databases, and SQL Server is not involved in any way, nor is a migration to SQL Server likely. If I can't migrate anything over, everything will stay as it is, and the NEW_NETWORK computers will not be used.

    The problem: When I try to update these separate databases (I shall call one "BANK_ACCOUNT", but the name does not matter), it says "this recordset cannot be updated". It also will sometimes not pull information out of the 'customers' table (ie: date_entered) when looking at a report of everyone who opened a bank account on a certain day (ie: today).

    I have tried:

    - Giving 'everyone' full control via shared directory permissions
    - Giving 'everyone' full control on a file system level
    - Checking the permissions within Access (everyone has full read/write on all tables)
    - Copying the entire server contents from one file server to another (ie: xcopy everything)
    - Copying the entire local client files from one computer to another, putting them in the exact same position in the file system, with the same permissions (or full control to 'everyone')
    - Running as an Administrator
    - Taking one of the NEW_NETWORK computers, having it join OLD_NETWORK and run the software (a direct copy from a working system with identical drive mappings) -- this did not work
    - Weeping openly

    My question: Is there anything else I can try? (Sorry for this being so long.)


  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine, and we have decided to buy a new server to support our business (our current server is an Asus EEE Box -- working great, but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

    - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
    - Robustness: we would like to set up a RAID array, and maybe have an external disk where we can store backups.
    - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer (LAMP, Tomcat+PostgreSQL, SQL Server).
    - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify (a RAID sketch follows the list):

    - Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    - What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5. Should we buy a hardware RAID controller, or is it OK to use software RAID (mdadm)? If hardware, which controller do you suggest?
    - What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    - How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    - What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    - What OS do you recommend? We know Ubuntu, Debian, and Gentoo very well (we would like to use Ubuntu Server); however, it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
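    On the software RAID option, as far as I understand mdadm can even be introduced without a full wipe, by building a degraded RAID 1 on a new disk, copying the data over, and then attaching the old disk (a rough sketch; device names are examples, and this deserves a rehearsal on scrap hardware first):

        # Create a one-disk (degraded) RAID 1 on the new drive
        mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
        mkfs.ext4 /dev/md0
        # ...copy data across and switch the system over to /dev/md0, then:
        mdadm --add /dev/md0 /dev/sda1   # the old disk joins and resyncs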


  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts.  These rules help define how Web 2.0 businesses operate today and into the future.  While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth.  So I've decided to more directly apply the rules to retail, along with some notable examples of success.  The list below shows Jeff's Google Rule, some industry examples, and the New Retailer Rules that I created.

    New Relationship ("Your worst customer is your friend; your best customer is your partner").  Newegg.com lets manufacturers respond to customer comments that are critical of the product, and their EggXpert site lets customers help other customers.  New Retailer Rule: Listen to what your customers are saying about you.  Convert the critics to fans and the fans to influencers.

    New Architecture ("Join a network; be a platform").  Tesco and BestBuy released APIs for their product catalogs so third-parties could create new applications.  New Retailer Rule: Become a destination for information.

    New Publicness ("Life is public, so is business").  Zappos and WholeFoods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers.  It's not always pretty, but it's genuine.  New Retailer Rule: Be transparent.  Share both your successes and failures with your customers.

    New Society ("Elegant organization").  Wet Seal helps their customers assemble outfits and show them off to each other.  Barnes & Noble has a community site that includes a book club.  New Retailer Rule: Communities of your customers already exist, so help them organize better.

    New Economy ("Mass market is dead; long live the mass of niches").  lululemon found a niche for yoga-inspired athletic wear.  Threadless uses crowd-sourcing to design short runs of T-shirts.  New Retailer Rule: Serve small markets with niche products.

    New Business Reality ("Decide what business you're in").  When Lowes realized catering to women brought the men along, their sales increased.  New Retailer Rule: Customers want experiences to go with the products they buy.

    New Attitude ("Trust the people and listen").  In 2008 Starbucks launched MyStarbucksIdea to solicit ideas from their customers.  New Retailer Rule: Use social networks as additional data points for making better merchandising decisions.

    New Ethic ("Be honest and transparent; don't be evil").  Target is giving away reusable shopping bags for Earth Day.  Kohl's has outfitted 67 stores with solar arrays.  New Retailer Rule: Being green earns customers' respect and lowers costs too.

    New Speed ("Life is live").  H&M and Zara keep up with fashion trends.  New Retailer Rule: Be prepared to pounce on your customers' fickle interests.

    New Imperatives ("Encourage, enable and protect innovation").  1-800-Flowers was the first to do sales in Facebook and an early adopter of mobile commerce.  The Sears Personal Shopper mobile app finds products based on a photo.  New Retailer Rule: Give your staff permission to fail so innovation won't be stifled.

    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.


  • Aggregate SharePoint Event/Items into your Calendar view using Calendar Overlay

    - by eJugnoo
    One of the most common features I have seen in common use for SharePoint (prior to 2010) in Intranet environments for Team site is Calendar’s. Not only the Calendar list type, but also the ability to add a Calendar view to any list that has the desired columns to construct a Calendar – such as Start, End, Title etc. While this was all great for a single site/calendar, the problem of having to track numerous calendar’s remained. With introduction of Outlook 2007 bi-directional integration with SharePoint, and particularly the ability of Outlook to overlay calendar helped bridge the gap. Now one could connect to number of team sites, and setup Calendar overlays in Outlook using varying colours, to easily identify event source and yet benefit from the plotting of events on single Calendar view. This was all good, but each user in your Enterprise was supposed to setup in a “pull” fashion. This is good for flexibility, not so good when you need to “push” consistency and productivity (re-use). So, what was missing on SharePoint is the ability to have server-side overlay’s that everyone can see – in a single place, aggregating multiple sources. Until SharePoint 2010 arrived! Calendars Overlay in SharePoint 2010 There are Calendar lists and Calendar views. View can be created for almost all lists, as far as you have desired column’s in a list like Start, End, Title etc. to be able to describe and plot an item in a Calendar format. In SharePoint 2010, create a new Calendar list. Go to Calendar ribbon tab, and click Calendar Overlay. You get the screen with list of existing Overlay’s associated with current Calendar (list – in our case). Click on “New Calendar”… Notice the breadcrumb! You are adding Overlay to existing list (Team Calendar – in our case). You have choice of “pulling” Calendar info from an existing Calendar (list/view) in SharePoint or even from Exchange! Set standard info like a name, description and decide the colour you want for the items in aggregated Calendar overlay. Select the source site/list/view, anywhere in farm. When you select Exchange as source of Calendar, you get option to add OWA and Exchange Web Service url. I will cover details of connecting with Exchange in another post, and focus on Overlay’s with SharePoint for this one. Once you have added a new Calendar overlay to existing Calendar veiw, you get something like below for Day view, Week view, and Month view respectively Notice the Overlay colours: Now, if you decide to connect this Calendar to Outlook to sync the items, it will only sync items from main view, and not from Overlay source. So such Overlay of calendar’s is server-side aggregation only. That increases my curiosity, so I try adding the Calendar list view as a web-part on a new page. As you see, this instance of view didn’t include item from source that we had added to default Calendar view. This is – probably – due to the fact that this is a new web-part view for the page. If you want to add overlay to this one, you have to redo that from Ribbon. This also means, subject to purpose and context you get the flexibility to decide what overlay is suited. Also you can only add 10 Overlay’s to an existing view instance. Conclusion Calendar Overlay is clearly a very useful feature that fills a gap of not being able to aggregate information from multiple sources into a Calendar view within context of current items. Source of items can be existing SharePoint calendar views on any site, or even Exchange (via OWA/Exchange web services). 
    Conclusion

    Calendar Overlay is clearly a very useful feature: it fills the gap of not being able to aggregate information from multiple sources into a calendar view within the context of the current items. The source of items can be an existing SharePoint calendar view on any site, or even Exchange (via OWA/Exchange Web Services). The list type of the source doesn't matter; it just needs a Calendar view type available. You can have 10 overlays. Overlays belong to the specific view, and are server-side only – which means they do not get synced to Outlook. And while you can drag and drop the current list's items, you cannot edit overlay items: they are read-only within the scope of the current calendar view. You can, of course, click a source overlay item to edit it at the source.

    I'd like to hear how you think overlays will help in your case, or how you are already using them... Enjoy SharePoint! --Sharad

    Read the article

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for identity management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this because I cannot find documentation for it anywhere else, and I hope others find it useful. For starters, this was not an easy task for me to figure out, and I would really like to know why something that a lot of people presumably need to do is so cumbersome – but that's for another day.

    The dropdown I wanted to add was for 'Account Status', which displays whether the account is 'Enabled' or 'Disabled' in the data source, Active Directory. This option also lets helpdesk users or admins administer the userAccountControl attribute in AD from the FIM Portal interface.

    The first step is to create the attribute itself. This is done by going to Administration → Schema Management in the FIM 2010 Portal. Once there, click All Attributes; what is listed here are all attributes and their associated resource types in FIM. To create the 'AccountStatus' attribute, click New. As shown below, enter 'AccountStatus' (no spaces) for the System Name and 'Account Status' for the Display Name. The Data Type is 'Indexed String'. Click Next. Leave everything on the Localization tab at its defaults and click Next. On the Validation tab, enter the regular expression ^(Enabled|Disabled)?$ with our two desired string values, 'Enabled' and 'Disabled'. Click Finish and then Submit to complete adding the attribute.

    The next step is to associate the attribute with a resource type; this is called 'binding' the attribute. From the Schema Management page, click All Bindings, and on the page that comes up, click New. As shown below, enter 'User' for the Resource Type and 'Account Status' for the Attribute Type – this binds the Account Status attribute to the 'User' resource type. Click Next. On the Attribute Override tab, type 'Account Status' in the Display Name field and click Next. On the Localization tab, click Next. On the Validation tab, enter the same regex, ^(Enabled|Disabled)?$, that we entered for the attribute. Click Finish and then Submit to complete.
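    A quick aside on that validation pattern: because the whole alternation sits in an optional group, it accepts exactly 'Enabled', 'Disabled', or an empty value, and it is case-sensitive. A small Python sketch (not part of the walkthrough) confirms the behaviour:

        import re

        # The validation pattern used for both the attribute and the binding
        pattern = re.compile(r"^(Enabled|Disabled)?$")

        for candidate in ["Enabled", "Disabled", "", "enabled", "Expired"]:
            verdict = "accepted" if pattern.fullmatch(candidate) else "rejected"
            print(f"{candidate!r}: {verdict}")
        # 'Enabled', 'Disabled', and '' are accepted;
        # 'enabled' (wrong case) and 'Expired' are rejected.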
    Now that the attribute and the binding are complete, you have to give users permission to see the attribute on the user edit page. Go to Administration → Management Policy Rules, look for the rule named Administration: Administrators can read and update Users, and click it. Once it opens, click the Target Resources tab and look at the section named Resource Attributes. Add the 'Account Status' attribute at the end and check it with the validator. Once done, click OK to save the changes.

    Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for user editing. Go to Administration → Resource Control Display Configuration, navigate until you find the RCDC named Configuration for User Editing, and click it.

    The first step here is to export the configuration data file: click the Export configuration link and save the file to your desktop or another folder. Find the file you just exported and open it in the XML editor of your choice – I use Notepad, but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM; I used EmployeeType as my example. Copy that control from the opening <my:Control…> tag to the closing </my:Control> tag, then paste it wherever you want it within the form, between two other controls. I chose to place the 'Account Status' field after the 'Account Name' field. After you paste the control, modify it so it matches the complete listing at the end of this post – notice where the attribute is specified as AccountStatus in the XML. Once you have finished modifying it, save the file and make sure it keeps a .xml extension.

    Now go back to the Configuration for User Editing screen and look at the section named Configuration Data. Click the Browse button, choose the XML file you just modified, and click OK at the bottom of the window – you are done! When you click a user's name in the FIM Portal, you should see the newly added dropdown box as below.

    Later I will post more about this dropdown, specifically how to automate actually 'Disabling' the account in the data source through FIM workflows and MAs.

    For reference, here is the complete control, with comments added to call out what each property binding does:

        <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList"
                    my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}"
                    my:Description="{Binding Source=schema, Path=AccountStatus.Description}"
                    my:RightsLevel="{Binding Source=rights, Path=AccountStatus}">
          <my:Properties>
            <!-- Which fields of each item supply the stored value, caption, and hint -->
            <my:Property my:Name="ValuePath" my:Value="Value"/>
            <my:Property my:Name="CaptionPath" my:Value="Caption"/>
            <my:Property my:Name="HintPath" my:Value="Hint"/>
            <!-- Populate the dropdown from the attribute's localized allowed values -->
            <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/>
            <!-- Two-way binding: the selection reads from and writes back to AccountStatus -->
            <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/>
          </my:Properties>
        </my:Control>
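    If you would rather script that copy-paste-modify step than hand-edit in Notepad, below is a rough Python sketch of one way to splice the new control into the exported file. It is not part of the original walkthrough: the file names and the my:Name="AccountName" anchor are illustrative assumptions, and it assumes the anchor control is written with an explicit </my:Control> closing tag.

        # Hypothetical helper: splice a new RCDC control in after an existing one.
        # File names and the anchor control name are illustrative assumptions.
        with open("ConfigurationForUserEditing.xml", encoding="utf-8") as f:
            rcdc = f.read()

        with open("AccountStatusControl.xml", encoding="utf-8") as f:
            new_control = f.read()  # the <my:Control> block shown above

        anchor = 'my:Name="AccountName"'   # control to insert after (assumed name)
        close_tag = "</my:Control>"

        # Find the end of the anchor control and insert the new control after it.
        pos = rcdc.index(anchor)
        end = rcdc.index(close_tag, pos) + len(close_tag)
        patched = rcdc[:end] + "\n" + new_control + rcdc[end:]

        with open("ConfigurationForUserEditing.xml", "w", encoding="utf-8") as f:
            f.write(patched)

    Whether you splice by hand or by script, the import step through the Configuration Data section is the same.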

    Read the article
