Search Results

Search found 5423 results on 217 pages for 'care industry'.

  • Yet again: "This device can perform faster" (Samsung Galaxy Tab 2)

    - by Mike C
    I've been doing a lot of research with no reasonable solution. Please excuse the length of my post. When I plug my Galaxy Tab 2 (7" / Wi-Fi only / Android ICS) into my Windows 7 64-bit machine, I (almost always) get this warning popup that "This device can perform faster." And in fact, transfers onto the Tab in this mode are slow. The two times I've been able to get a high-speed connection, the transfer has occurred at the expected speed. I just don't know what to do to get that high-speed transfer. (The first time I did, it was the first time I connected the Tab; the second time I did, I was fiddling around and unplugging/plugging in again.) That popup is telling me that the device is USB2, but that it thinks I've connected to a USB1 port. In fact, every USB port (there are ten) on this system is USB2. It's an ASUS M3A78-EMH mobo from late 2008. I'm not sure what the chipset is; the CPU is an AMD Athlon 4850e, but I've seen this message reported for non-AMD systems. (Every mobo reference I've seen in reports on this has been for Asus, but of course most reporters aren't reporting that info at all.) The Windows 7 installation is just a couple weeks old (I had a disk crash) but I saw the same warning on the WinXP/64 that was installed previously. In Device Manager, there are two "Standard Enhanced PCI to USB Host Controller" nodes which are the actual high-speed controllers. There are also five "Standard OpenHCD USB Host Controller" nodes, which I have determined are virtual USB1 controllers embedded in the "Enhanced" controllers. (In Device Manager, I'm using View|Devices by Connection.) My high-speed thumb drives, external disks, and iPod all show up as subnodes of the "Enhanced" controllers; the keyboard, mouse, and USB speakers under the "OpenHCD" ones -- and this is true no matter which ports these devices are plugged into. The Tab shows up under an OpenHCD node, unsurprisingly. It appears as a threesome: a top-level "Mobile USB Composite device" with two subs: "Galaxy Tab 2" and "Mobile USB Modem." (I have no idea what the modem device implies or how I might use it, but I don't care about it either: I just want the Tab to reliably connect at high speed.) On the Tab, the USB support has a switch between PTP and MTP, the latter being the default, and the preferred mode for me (as I'm usually hooking it up for music synch). I have tried, however, connecting it as PTP, and it still connects as USB 1. (As PTP, only the "Galaxy Tab 2" device appears -- no Composite, no Modem.) If it's plugged in as MTP and I change the setting to PTP, Windows unloads and reloads the device, and voila: The Tab appears under an "Enhanced" node, but eventually re-loads again to show a exclamation icon on the device; Properties then shows "This device cannot start." Same response if I plug it in as PTP and then change to MTP; in this case, only the Tab itself shows the exclamation, not the other two devices. One thing I have not tried, and really would prefer to avoid, is installing the "beta" chipset driver available on the Asus website, which is dated 2009. Windows tells me it has the most up-to-date drivers for the Tab, and for the chipset, and I'm inclined to believe that. I suspect the problem is with the Samsung drivers, or possibly the hardware. One suggestion I saw elsewhere which might, possibly, pertain is to ensure the USB cable is properly shielded; however, the Tab has one of those misbegotten 30-pin, not-quite-an-iPod connectors; I don't know if I could find a 3rd party one. 
It seems unlikely that this cable is improperly shielded, though. (Is there a way to test that?) So, my question is: does anyone know how to get this working as one might reasonably expect it to?

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP Flood which is more of application dependent. I run a Game-Server and an attacker is flooding me with "getstatus" query which makes the GameServer respond by making the replies to the query which cause output to the attacker's IP as high as 30mb/s and server lag. Here are the packet details, Packet starts with 4 bytes 0xff and then getstatus. Theoretically, the packet is like "\xff\xff\xff\xffgetstatus " Now that I've tried a lot of iptables variations like state and rate-limiting along side but those didn't work. Rate Limit works good but only when the Server is not started. As soon as the server starts, no iptables rule seems to block it. Anyone else got more solutions? someone asked me to contact the provider and get it done at the Network/Router but that looks very odd and I believe they might not do it since that would also affect other clients. Responding to all those answers, I'd say: Firstly, its a VPS so they can't do it for me. Secondly, I don't care if something is coming in but since its application generated so there has to be a OS level solution to block the outgoing packets. At least the outgoing ones must be stopped. Secondly, its not Ddos since just 400kb/s input generates 30mb/s output from my GameServer. That never happens in a D-dos. Asking the provider/hardware level solution should be used in that case but this one is different. And Yes, Banning his IP stops the flood of outgoing packets but he has many more IP-Addresses as he spoofs his original so I just need something to block him automatically. Even tried a lot of Firewalls but as you know they are just front-ends to iptables so if something doesn't work on iptables, what would the firewalls do? These were the rules I tried, iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP It works for the attacks on un-used ports but when the server is listening and responding to the incoming queries by the attacker, it never works. Okay Tom.H, your rules were working when I modified them somehow like this: iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP They worked for about 3 days very good where the string "xxxxxxxxx" would be rate-limited, blocked if someone flooded and also didn't affect the clients. But just today, I tried updating the chain to try to remove a previously blocked IP so for that I had to flush the chain and restore this rule ( iptables -X and iptables -F ), some clients were already connected to servers including me. So restoring the rules now would also block some of the clients string completely while some are not affected. So does this mean I need to restart the server or why else would this happen because the last time the rules were working, there was no one connected?
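
    A hedged sketch of rules that might help, on the assumption that this is a Quake 3 style server (the port 27960 and the "statusResponse" reply string are placeholders introduced here, not details from the question): rate-limit the query per source with hashlimit, and cap the server's own replies so spoofed queries cannot be amplified into 30mb/s of output.

        # Rate-limit the incoming "getstatus" query per source IP (hashlimit keeps a separate counter per srcip)
        iptables -A INPUT -p udp --dport 27960 -m string --algo bm --string "getstatus" \
            -m hashlimit --hashlimit-above 5/second --hashlimit-burst 10 \
            --hashlimit-mode srcip --hashlimit-name getstatus -j DROP
        # Cap the outgoing replies as a safety net, since the attacker spoofs his source addresses
        iptables -A OUTPUT -p udp --sport 27960 -m string --algo bm --string "statusResponse" \
            -m limit --limit 50/second --limit-burst 100 -j ACCEPT
        iptables -A OUTPUT -p udp --sport 27960 -m string --algo bm --string "statusResponse" -j DROP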

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary. Goal / Issue I'd like to use the SSD to take advantage of Intel's Smart Response technology (allows the SSD to act as a cache for HDDs) I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1) However, Windows only sees the drive in device manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array. Steps so Far Moved the SSD onto the Intel controller (I'd had it on the Marvel 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way) Updated the BIOS of the motherboard to the latest version Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence). Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array. Checked the BIOS: it does not show up in the boot order, but is an option that can be selected under boot options. Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult? moved the ssd to the marvel controller. The firmware update cd appeared to hang while searching for drives. swapped out the SATA cable for the manufacturer's and moved back to the intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in device manager. Move SSD to port from 0-3 on MOBO and set SATA mode to IDE (after disconnecting RAID1 config) to allow the firmware update to work. Firmware was already at the latest version. Next Steps ? Components involved ASUS P8Z68-V PRO motherboard (Intel Z68 Chipset) Intel i7 2600k Processor 2 x 1TB 7200 RPM HDDs 64 GB Crucial M4 SSD (M4-CT064M4SSD2) For Reference -- Storage Configuration Intel 3 gbps Intel 3gbps Intel 6gbps Marvel 6gbps +----------+ +----------+ +----------+ +----------+ | | <----+ | | +-+ | | | |----------| | |----------| |-|--------| |----------| | | | | + | | | | | | +----------+ | +--|-------+ +-|--------+ +----------+ | | | + v v | 1 TB HDD 64 GB SSD + +> 1 TB HDD For Reference -- Intel RST (v10.8.0.1003) Screenshot Don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD. Any thoughts? Thanks in advance for any help!

  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this. The problem I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get: Task 'emailaddress' reported error (0x8004010F) : 'The operation failed. An object cannot be found.' ---- error. I'm using cached exchange mode, which if I turn off Outlook hangs completely from the moment I start it up. (Note I've replaced my actual email address with 'emailaddress') Background information I migrated mailboxes, public store, etc. from a Small Business Server 2003 with Exchange 2003 box to a Server 2008 R2 with Exchange 2010 based primarily on an experts exchange how to article. The exchange server is up and running as an internet facing exchange server with all of the roles necessary to send and receive mail and in that capacity is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem. After research I discovered this is most likely because I failed to run the “update-globaladdresslist” (or get / update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me: WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information – first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated. (Note that I’ve replaced my domain with “domainname.com” and my organization name with “containername”) What I’ve tried I don’t want to use the old OAB, or GAL, I don’t care about either, our GAL and distribution lists needed to be organized anyway, so at this point I really just want to get rid of the old reference to the “first administrative group” and move on. I’ve tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old GAL, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists but I dont want to break something else. 
Commands run to try to create a new GAL and tell exch10 to use that GAL: New-globaladdresslist -name NAMEOFNEWGAL Set-globaladdresslist GUID -name NAMEOFNEWGAL This did nothing for me except now when I run get-globaladdresslist or with the | FL pipe I see two GALs listed, the "default global address list" and the "NAMEOFNEWGAL" that I created. After a little more research this morning it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to maybe remove the default address list via adsiedit and recreate with a command something like new-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients. This would be acceptable but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and if I'd have to remove multiple child references/records. Of interest, I'm getting an event ID 9337 in my application log: "OALGen did not find any recipients in address list \Global Address List. This offline address list will not be generated. -\NAMEOFMYOAB" on my Exchange 2010 box, which pretty much to me seems to confirm my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!

  • Alternative Methods of Sharing Folders in Windows?

    - by Blaenk
    Hey guys. I'm running Windows 7 and as of now I simply share folders as one usually does in Windows. I then have a MacBook with Leopard (Now Snow Leopard) which I use to connect to my computer to mount the shares by going to Finder, then CMD + K and typing smb://BlaenkPC (The name of my PC) into the address box. This consequently connects to my computer and mounts all of the shares. The problem is that sometimes, if for example I close my MacBook (Which makes it go to sleep) or sometimes even without doing that, the connection somehow drops. Sometimes I close the MacBook and upon re-opening it, everything still works; it's random. It still shows the computer as being connected, but it just shows 'loading' indefinitely. If I hit 'eject' with the intention of re-connecting to the computer, it disappears from the sidebar (The Computer Icon) in Finder, but I cannot re-connect. Activity Monitor (or ps aux, whichever) both show hung instances of umount; one for each share that was mounted. I cannot kill these processes with kill or killall (Yes, even with sudo, and sending signal -9). This has happened to me before, and here is another person who has experienced this. My question boils down to this: Is there an alternative method of sharing folders in Windows, that my Mac can read/understand, that is possibly more reliable and preferably just as fast? I usually use the mounted shares to watch television episodes off my computer, or movies, etc. (In other words, I open them in VLC and they automatically stream from my computer). As far as I can tell, this is a problem with the Samba protocol. I have heard of NFS, but I am not sure if I would have to re-format my drives, or what. I don't mind running a service or daemon to allow the sharing of the folders, I just want it to be done and hopefully in a better way than typical Windows shares through Samba. Usually when I encounter this problem, which is often (read: every day), I have no other option but to restart the MacBook. As I stated in the first question I linked to, shutting down and restarting don't work; I have to manually force the shutdown by holding the power button. I have not modified my installation of Mac OS X in any hackish way, so I doubt it's something with the Operating System, but worst come to worst, I might end up reformatting and doing a clean install to see if that fixes anything, as I am at a complete loss as to what may be causing the problem, and no one else seems to have any idea or care, despite there being quite a few people suffering from this problem, as my research has shown. Any pieces of information that can help are extremely appreciated. You don't have to answer every question on here, but maybe even some insight as to why it might not be possible to kill those hung umount instances for example, or why I may not be able to reconnect using samba (Is it something regarding the way the protocol works?). One thing to note is that I have another computer in the home network that doesn't seem to have this problem. However, it is also running Windows 7 (Note though that I am not using the homegroup feature, but the typical windows sharing feature). My only deduction is that the problem is being caused by the way the Mac (Or Samba implementation, whichever) is handling things. Perhaps it is a limitation.
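
    Not a fix for the underlying hang, but a hedged sketch of what can be tried from the Mac's shell before resorting to a hard reboot (the share name and mount point are placeholders). On the NFS question: NFS exports directories rather than whole filesystems, so no reformatting would be needed, but Windows 7 has no built-in NFS server, so it would mean running third-party server software on the PC.

        # Force-unmount the dead share; this may still fail if the kernel thread is truly wedged
        sudo umount -f /Volumes/ShareName
        # diskutil can sometimes clear a mount point that plain umount cannot
        diskutil unmount force /Volumes/ShareName
        # Remount from the shell instead of Finder once the mount point is free
        mkdir -p ~/mnt/share && mount_smbfs //guest@BlaenkPC/ShareName ~/mnt/share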

  • How to stop Apache from crashing my entire server?

    - by CyberShadow
    I maintain a Gentoo server with a few services, including Apache. It's fairly low-end (2GB of RAM and a low-end CPU with 2 cores). My problem is that, despite my best efforts, an over-loaded Apache crashes the entire server. In fact, at this point I'm close to being convinced that Linux is a horrible operating system that isn't worth anyone's time looking for stability under load. Things I tried: Adjusting oom_adj for the root Apache process (and thus all its children). That had close to no effect. When Apache was overloaded it would bring the system to a grind, as the system paged out everything else before it got to kill anything. Turning off swap. Didn't help, it would unload memory paged to binaries of processes and other files on /, thus causing the same effect. Putting it in a memory-limited cgroup (limited to 512 MB of RAM, 1/4th of the total). This "worked", at least in my own stress tests - except the server keeps crashing under load (basically stalling all other processes, inaccessible via SSH, etc.) Running it with idle I/O priority. This wasn't a very good idea in the end, because it just caused the system load to climb indefinitely (into the thousands) with almost no visible effect - until you tried to access an unbuffered part of the disk. This caused the task to freeze. (So much for good I/O scheduling, eh?) Limiting the number of concurrent connections to Apache. Setting the number too low caused web sites to become unresponsive due to most slots being occupied with long requests (file downloads). I tried various Apache MPMs without much success (prefork, event, itk). Switching from prefork/event+php-cgi+suphp to itk+mod_php. This improved performance, but didn't solve the actual problem. Switching I/O schedulers (cfq to deadline). Just to stress this out: I don't care if Apache itself goes down under load, I just want the rest of my system to remain stable. Of course, having Apache recover quickly after a brief period of intensive load would be great to have, but one step at a time. Right now I am mostly dumbfounded by how can humanity, in this day and age, design an operating system where such a seemingly simple task (don't allow one system component to crash the entire system) seems practically impossible - or at least, very hard to do. Please don't suggest things like VMs or "BUY MORE RAM". Some more information gathered with a friend's help: The processes hang when the cgroup oom killer is invoked. Here's the call trace: [<ffffffff8104b94b>] ? prepare_to_wait+0x70/0x7b [<ffffffff810a9c73>] mem_cgroup_handle_oom+0xdf/0x180 [<ffffffff810a9559>] ? memcg_oom_wake_function+0x0/0x6d [<ffffffff810aa041>] __mem_cgroup_try_charge+0x32d/0x478 [<ffffffff810aac67>] mem_cgroup_charge_common+0x48/0x73 [<ffffffff81081c98>] ? __lru_cache_add+0x60/0x62 [<ffffffff810aadc3>] mem_cgroup_newpage_charge+0x3b/0x4a [<ffffffff8108ec38>] handle_mm_fault+0x305/0x8cf [<ffffffff813c6276>] ? schedule+0x6ae/0x6fb [<ffffffff8101f568>] do_page_fault+0x214/0x22b [<ffffffff813c7e1f>] page_fault+0x1f/0x30 At this point, the apache memory cgroup is practically deadlocked, and burning CPU in syscalls (all with the above call trace). This seems like a problem in the cgroup implementation...
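
    For reference, a minimal sketch of the memory-limited cgroup setup described above, assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory and Apache's pid file at /var/run/apache2.pid (both paths are assumptions). The memsw limit is the piece that keeps the group from pushing the rest of the system into swap:

        # Create a group with a hard RAM ceiling for Apache and all of its children
        mkdir -p /sys/fs/cgroup/memory/apache
        echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes
        # RAM + swap ceiling; requires swap accounting (swapaccount=1 on the kernel command line)
        echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.memsw.limit_in_bytes
        # Move the master httpd process into the group; forked workers inherit the membership
        cat /var/run/apache2.pid > /sys/fs/cgroup/memory/apache/tasks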

  • Intermittent lockups, unable to diagnose in over a year

    - by Magsol
    Here's a real doosie; I may just give my firstborn child to whomever helps me solve this problem. In July 2008, I assembled what would be my desktop computer for graduate school. Here are the specs of the machine I built: Thermaltake 750W PSU Corsair Dominator 2x2GB 240-pin SDRAM Thermaltake Tower Asus P5K Deluxe Motherboard Intel Core 2 Quad Q9300 2.5GHz CPU 2 x GeForce 8600 GT WD Caviar Blue 640GB hard drive CD burner DVD burner Soon thereafter, I ordered a new motherboard (because I was an idiot; that first motherboard supported CrossFire, not SLI), an Asus P5N-D. I was originally running Windows XP SP3. Pretty much right into the start of the fall semester, my desktop would simply lock up after awhile. If my system was largely idling, it would be after 1-3 days. If was gaming, it often happened an hour or two into my gaming session, indicating a link to activity level. Here's where it started getting interesting. I started looking at the system temps. The CPU was warmer than it should have been (~60s C), so I purchased some more efficient cooling compound a way better cooler for it. Now it hardly goes over 40 C. Intel was even kind enough to swap it out for free, just to rule it out. Lockups continued. The graphics cards were also running pretty warm: about 60 C idling. Removing one of them seemed to improve stability a little bit...as in, it wouldn't lock up quite as frequently, but still always eventually locked up. But it didn't matter which card I used or removed, the lockups continued. I reverted back to the original motherboard, the P5K Deluxe. Lockups continued. I purchased an entirely new motherboard, eVGA's nForce 750i. Lockups continued. Ran memtest86+ over and over and over, with no errors. Even RMA'd the memory. Lockups continued. Replaced the PSU with a Corsair 750W PSU. Lockups continued. Tried disconnecting all IDE drives (HDDs are SATA). Lockups continued. Replaced both graphics cards with a single Radeon HD 4980. Average temps are now always around 50 C when idling, 60 C only when gaming. Lockups continued. Throughout the whole ordeal, the system has been upgraded from Windows XP SP3 to Vista 32-bit, to Vista 64-bit, and is now at Windows 7 64-bit. Lockups have occurred at every step along the way (each OS was in place for at least a few months before the next upgrade). Edit: By "upgrade" I mean clean install each time. In addition to those reformats, I have performed many, many other reformats of the system and a reinstall of whatever OS had been previously installed in an attempt to rectify this problem, to no avail./Edit When the system locks up, there's no blue screen, no reboot, no error message of any kind. It simply freezes in place until I hit the reset button. Very, very rarely, once Windows boots back up, the system informs me that Windows has recovered from an error, but it can never find the source aside from some piece of hardware. I've swapped out every component in this computer, and there are more fans in it than I care to count...though for the sake of completeness: top 80mm case fan (out) rear 80mm case fan (out) rear 120mm case fan (out) front 120mm case fan (in) side 250mm case fan (in) giant CPU fan on-board motherboard fan (the eVGA board) triple-fan memory setup (came with the memory) PSU internal fan another 120mm fan I stuck on the underside of the video card to keep hot air from collecting at the bottom of the case I'm truly out of ideas. ANY help at all would be oh-so-very GREATLY appreciated. Thank you!

  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
    So I'm working on a discussion platform, and various clients will visit http://host/thread.php, which will render the discussion thread to date in addition to a form to submit a new post. When a new post is submitted, I would like all of the other clients with browser windows open to have it appear in near-real-time. One of the constraints of my script is that it may not use a DBMS and it must stay in the filesystem. Additionally, I can't use any PECL/PEAR extensions like inotify or anything like that for IPC. The flow will look like this: Client A requests thread.php and the thread is so far empty, but nonetheless it opens a Server-Side Event at eventPusher.php. Client B does the same. Client A fills out a post in the form and and submits (POSTs) it to subHandler.php. ??? (subHandler stores the new submission into the main thread storefile which gets read from when a fresh, new client requests thread.php, in addition to somehow signalling to the continually-running eventPusher event-source that a new comment was posted and that it should echo the event-json to the client. How, exactly, it will send this signal I'm yet unsure of, but there are a few options that I've thought of -- this is the crux of the question, so see below for more clarification) eventPusher.php happily pushes the new event to the client and it shows up soon after it was originally submitted on all clients who have the page open's screens. Now for the #4 missing-link mystery-step, I see a few problems. I mean, either way, eventPusher is gonna be doing a while loop of some sort -- it's gonna be polling something, I think that much is clear. (If that's a bad assumption please do let me know.) Now, the simplest way would be subHandler gets invoked on the form submission, writes it to the main store in addition to newComments.xml, then exits without doing anything else. Then eventPusher checks in newComments.xml every X seconds (by the way, what would be a reasonable time interval here?) and if it finds something then it emits an event to the client. Now, my fear with this is that the server's hard drive will have to constantly start spinning up. Maybe this isn't the case, perhaps it would just get cached in RAM and the linux kernel would take care of this transparently such that filesystem access doesn't actually engage the device because the kernel knows that that particular file hasn't changed since last read. * idea #2: I have no idea how to go about this, but perhaps there is a variable scope that gets stored in general RAM on the system which can be read by any process. Like if we mega-exported a bash variable so that $new_post is normally false but it gets toggled to true by subHandler, and then back to flase once it's pushed to the client. I doubt there's such a variable scope in PHP directly, but I struggle with the concept of variable scope, I just can't seem to understand it no matter what I read on it. * idea #3: eventPusher queries ps in its whileloop for another instance of itself. If there's not another eventPusher active then it's highly unlikely that new comments will be getting submitted. It's okay if this only works =90% of the time, it doesn't need to be completely foolproof. * idea #4: eventPusher queries DMESG to see if that file's been written to recently. 
So to sum everything up, I need inter-PHP-script communication in near-real-time that will work on a standard mod_php shared hosting setup without any elevated privileges, PHP add-on modules, or other system adjustments that can't be done from the PHP script itself at runtime, and without spinning up the drive more than a few times. No SQL servers either. Apologies if my English isn't the best; I'm still trying to improve it.
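
    On the disk-wear worry in step 4, a hedged way to check whether mtime polling ever reaches the device or is served entirely from the page cache (the file path is a placeholder):

        # Terminal 1: watch whether the disk actually sees reads while the poll loop runs
        iostat -dx 2
        # Terminal 2: poll the flag file's mtime the way eventPusher.php would via filemtime()
        touch /tmp/newComments.flag
        while true; do
            stat -c %Y /tmp/newComments.flag
            sleep 1
        done

    On a typical Linux box the dentry and page caches should absorb these stat() calls, so the drive only has to work when subHandler.php actually writes the file.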

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set a samba server that seems to work, all shares are seemingly exported as readonly, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectiveley. I noted that domain accounts did NOT include a domain and they are in german (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Adminstrator user with the password taken from AD. 
Now I run samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 #IP des Domain Controllers os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user, I want to see somebody WRITING to that directory before I start fiddling with the authentication settings. (Who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED . Accessing it from windows shows ACLs that look correct (The user may write) - but it does not work, I can only read files not write. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my what am I overlooking? What am I to blind to see?
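
    A hedged checklist for this: NT_STATUS_MEDIA_WRITE_PROTECTED usually means the layer underneath Samba refused the write rather than the share definition itself, so it is worth checking both (paths are taken from the config above):

        # Does the parsed configuration still say the share is writable?
        testparm -s 2>/dev/null | grep -A6 '^\[export\]'
        # Is /mnt/sdc1 mounted read-only, or on a filesystem without usable ACL/xattr support?
        mount | grep sdc1
        # Can the directory be written locally at all, outside of Samba?
        touch /mnt/sdc1/export/writetest && echo "local write ok"
        # Which Unix user is the smbd connection actually mapped to?
        smbstatus -v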

  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few, most are OK. The one I really care about is paypal.com. I have done the usual things. Let's see: Checked my etc/hosts Flushed the DNS cache Checked firewall Switched on & off virus protection Switched on and off ad blocking pinged the sites Eventually, I decided to look at what curl is saying in detail == Info: About to connect() to www.paypal.com port 443 (#0) == Info: Trying 66.211.169.2... == Info: connected == Info: SSLv3, TLS handshake, Client hello (1): => Send SSL data, 110 bytes (0x6e) 0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64 ...j..Ol..W+=.td 0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c .'%.:.?A.....g| 0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00 ........*.9.8.5. 0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./..... 0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................ 0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e ................ 0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d www.paypal.com (hangs here for ever) This looks to me like paypal is refusing to reply to the first SSL handshake. I don't know much about SSL, but compaing to the output from a site that works for me seems to make it obvious == Info: About to connect() to www.cibc.com port 443 (#0) == Info: Trying 159.231.80.200... == Info: connected == Info: SSLv3, TLS handshake, Client hello (1): => Send SSL data, 108 bytes (0x6c) 0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b ...h..Ol.j.g...K 0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32 .I...[.c.H...C.2 0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00 G.......*.9.8.5. 0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./..... 0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................ 0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c ................ 0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d www.cibc.com == Info: SSLv3, TLS handshake, Server hello (2): <= Recv SSL data, 74 bytes (0x4a) 0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11 ...F....X.&..e.. 0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3 .o&{;m.._.G...M. 0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18 ...*T> _k.Z.8d]. 0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78 e...a....%a0..*x 0040: b8 ee b8 7e f2 65 6a 00 04 00 ...~.ej... == Info: SSLv3, TLS handshake, CERT (11): ... and so on - working nicely eventually get some nice HTML Now I am reaaly stuck. This has been going on for five days, so I am pretty sure that the problem is not with paypal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I could not be offering any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least giving an error?
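
    A hedged way to take curl out of the picture and to test one common suspect for a handshake that dies right after the ClientHello, a path-MTU or middlebox problem (Linux ping syntax shown; on Windows the equivalent is ping -f -l 1472 www.paypal.com):

        # Try the handshake with openssl directly, forcing specific protocol versions
        openssl s_client -connect www.paypal.com:443 -tls1
        openssl s_client -connect www.paypal.com:443 -ssl3
        # Check whether full-size frames survive the path (1472 bytes of payload + 28 bytes of headers = 1500)
        ping -M do -s 1472 www.paypal.com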

  • What seems to be plaguing my hard drives?

    - by Craig
    In a little bit of a tech nightmare here. I know oodles about software, not so much more than the above average user about hardware. I recently had to toss an old desktop of mine. It was gradually getting slower and slower, and after shutting the desktop down for long periods of time, it would choke up upon startup. Sometimes it'd give a disk read error, sometimes say no OS was found, etc. Restarting it about 5-15 times would eventually boot properly. Weird. I also noticed that startup programs were going missing, Dropbox was reindexing my entire folder, and Backblaze was backing up less than the number of files that it should. This lead me to believe it was probably a hard drive issue. I began to wonder why I'd have hard drive issues, and came to closure when I assumed it was because of recent power surges and outages. I'm sure that does a number on the drive. I bought a new desktop recently. It's not a beast or anything, but it's enough for what I do. It's an eMachines (I know, I know) Ultra-Slim (http://www.amazon.com/eMachines-Ultra-Slim-ER1401-57-Desktop-PC/dp/B00475OG9U). This is ideal for me because it's small and portable. It comes with an AC adapter and battery, like a laptop. Just to be safe, I bought an uninterruptable power supply on top of that. It's basically protected completely from any outages that might scramble the drive. I set this up a few days ago and for the past few days I've been perfecting settings, downloading the usual applications, etc. Two days ago, I noticed Dropbox was reindexing my entire Dropbox folder. I installed both Dropbox and Backblaze on this system, but it is very much more lightweight than the other. Only about 15 third-party applications installed. I thought that maybe Dropbox and Backblaze were stressing my system, so I turned off Backblaze. Still, Dropbox runs and comes across this infinite reindex issue. I noticed that upon a reboot recently, two applications did not start on startup either. Also, much like my old desktop, every 3rd or 4th reboot, I'll be forced into a chkdsk. This makes me incredibly nervous. What could possibly be going wrong with my old, years-old desktop that is immediately causing the same issues to my new one? I've considered all of the basics. I'm in a very air conditioned room. I take run routine virus scans. I'd like to think I take care of my systems very well. What is this issue that is haunting me? There's always the possibility that this new desktop has a junk hard drive, but it just seems way too coincidental.
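
    Before blaming the software, it may be worth reading the drive's own error counters; a hedged sketch using smartmontools (which also runs on Windows; /dev/sda is the Linux-style device name and is an assumption):

        # Overall health verdict plus the raw attribute table
        smartctl -H /dev/sda
        smartctl -A /dev/sda    # watch Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count
        # Run the drive's built-in long self-test, then read the result once it finishes
        smartctl -t long /dev/sda
        smartctl -l selftest /dev/sda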

  • Nginx Multiple Domains

    - by showFocus
    I am trying to add a second virtual host to nginx. When i go to the new domain it redirects to the old one. I have tried restarting Nginx, rebooting the server. Has anyone come across this before, care to share? File: nginx.conf ### user www-data www-data; worker_processes 4; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 5; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /usr/local/nginx/sites-enabled/*; } File: ../sites-enabled/domain1.co.uk server { listen 80; server_name www.domain1.co.uk; rewrite ^/(.*) http://domain1.co.uk/$1 permanent; } server { listen 80; server_name domain1.co.uk; access_log /home/me/public_html/domain1.co.uk/log/access.log; error_log /home/me/public_html/domain1.co.uk/log/error.log; location / { root /home/me/public_html/domain1.co.uk/public/; index index.php index.html; # WordPress supercache & permalinks. include /usr/local/nginx/conf/wordpress_params.super_cache; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain1.co.uk/public/$fastcgi_script_name; } } File: ../sites-enabled/domain2.co.uk server { listen 80; server_name www.domain2.co.uk; rewrite ^/(.*) http://domain2.co.uk/$1 permanent; } server { listen 80; server_name domain2.co.uk; access_log /home/me/public_html/domain2.co.uk/log/access.log; error_log /home/me/public_html/domain2.co.uk/log/error.log; location / { root /home/me/public_html/domain2.co.uk/public/; index index.php index.html; # Basic version of WordPress parameters, supporting nice permalinks. # include /usr/local/nginx/conf/wordpress_params.regular; # Advanced version of WordPress parameters supporting nice permalinks and WP Super Cache plugin include /usr/local/nginx/conf/wordpress_params.super_cache; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain2/public/$fastcgi_script_name; } }
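
    A hedged set of checks for this symptom (requests for domain2 answered as domain1), which usually means the second vhost never got loaded or the request's Host header matches no server_name, so nginx falls back to the first server block it finds:

        # Does the running binary parse this config cleanly, and is it the config file you think it is?
        nginx -t
        # Is domain2.co.uk actually present where the include pulls files from?
        ls -l /usr/local/nginx/sites-enabled/
        # Ask the local nginx directly with a spoofed Host header, to rule out DNS pointing at another box
        curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' -H 'Host: domain2.co.uk' http://127.0.0.1/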

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators" Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on. I am one of the app admins. This is different than what I am used to. I used to just write code. The sysadmin took care not only of the OS but also installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem. I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories, to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and amount of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and making it far and away from the default set-up. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres. So giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.
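
    One common pattern, sketched here under assumptions (the group name appadmins, the account name andrew, and the Red Hat style /etc/httpd paths are all placeholders): give a group ownership of the config tree and grant only service control through sudo, so day-to-day work never needs root.

        # Create the group and add the application administrator to it
        groupadd appadmins
        usermod -aG appadmins andrew
        # Hand the Apache config tree to the group; keep the logs readable but not writable
        chgrp -R appadmins /etc/httpd && chmod -R g+rwX /etc/httpd
        chgrp -R appadmins /var/log/httpd && chmod -R g+rX /var/log/httpd
        # Allow exactly the commands needed to control the service, nothing more
        visudo -f /etc/sudoers.d/appadmins
        #   %appadmins ALL=(root) /usr/sbin/apachectl, /sbin/service httpd *

    This is roughly what the postgres system user already provides on the database side, which is why sudo su - postgres feels cleaner there.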

  • Hard drive after PCB swap strange stuff

    - by ramyy
    I’ve done a PCB swap to my HDD. The HDD model is: WD6400AAKS-00A7B2. The original PCB PN matches the new one (first three letter groups), though the cache mismatches (16MB original, 8MB new). The Hardware store that made the swap told me it was hard to do the swap, they have done firmware adaptation. I can see that this firmware version does not match the original, (01.03B01 original, 05.04E05 new). Still I can see that the serial number and model of the drive is correct, the hard drive appeared normal in the BIOS, all the partitions show and everything appears normal. I have encountered three things though, I have left the drive non operated for 2-3 weeks after the swap to avoid corrupting the data or anything else the new PCB might cause, until I buy a new drive and backup the data. I got a drive, and when I powered the old drive manually (I have a laptop, I use a normal desktop power supply and a USB/SATA connector), I heard the motor start and I could hear ticking as if the motor’s somehow struggling to start, and then the motor sound starts again then the ticking, and so on.. I tried powering again it happened again. The third time it started normally and I could see everything normally. I took the chance and copied all the data over to the new drive. When I was done, I powered off the drive (after more than 25 hours of continuous operation), tried to power it up again and it did so normally, and so are the times I powered it up later; but I got very suspicious now. What could be the problem here? And what happened new, it used to power normally after the swap directly? The second thing that happened is that I found size differences with some files; some include movies, songs, (.iso) files for games, and programs. I could find the size is the same, but size on disk is a little more on the new drive for these files. . I’ve tried some of those files (with size differences) they worked fine. They are not too much but still make you suspicious of the integrity of the data copied; one cannot try if all files are working for about (580 GB) worth of data. I tried copying these files on the same partition they exist of the old drive; they are the same in size as when copied to the new drive (allocation unit size not the issue). I took an image of a partition (sector by sector including empty ones) and when I explore it, these file sizes are equal to the original (old drive); I copy them anywhere else their size on disk, increases, i.e becomes equal to the ones I copy from the old drive itself anywhere. Why the size difference and can one trust the integrity of the data?? The third thing is that when I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: (USB mouse + Old HDD) then external HDD. Why that happens?? Considering the following: I compared the SMART reports from after the swap directly and after the copying, no error readings or reallocated sectors where reported. Here they are: http://www.image-share.com/ijpg-1939-219.html I later ran both WD data life guard tests and they came out passed. I’m worried for this drive since I must be sure the data is fine and safe on the new one, and I will consider it backup for the new one, since you can’t trust anything anymore. I hope you can forgive me for the length of the post, but couldn’t ignore any of the details, this hard drive contains very important data to me and I have to deal with the situation with great care.
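
    On the integrity worry: "size on disk" reflects allocation, not content, so a hedged way to settle it is to checksum the copies (Linux commands with placeholder mount points shown; on Windows, certutil -hashfile does the same job per file):

        # Hash every file on the old drive, then verify the copies against that list
        (cd /mnt/old_drive && find . -type f -exec md5sum {} +) > /tmp/old.md5
        (cd /mnt/new_drive && md5sum -c /tmp/old.md5 | grep -v ': OK$')
        # No output from the second command means every copied file matches bit for bit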

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years, we have as a small webdesign company rented a dedicated server at a large hosting provider. They had several support levels. When we signed up for this, we had very limited in-house knowledge about server maintenance, and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect in this were these claims: [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed These items were clearly stated on their website as being part of the advantage of this package.; With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package that is next in line ($100 vs $50). Over the years, we have paid several thousand dollars for this service. Then came the moment that I learned more and more about server management. And I found out step by step that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and fixes that were lacking. Shocked, I wrote the provider. Turns out, they decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation: [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail was still on their website, after they apparently stopped doing this years ago! Upon finding this out, I requested they refund all that we have paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for this one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago. Now, I am not the kind of person who runs to a lawyer for any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations. So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right here to claim this money, or are they right that there is some sort of statue of limitations on such claims? Any feedback is appreciated.

    Read the article

  • Deploying MVC2 application to IIS7.5 - Ninject asked to provide controllers for content files

    - by Rune Jacobsen
    I have an application that started life as an MVC (1.0) app in Visual Studio 2008 SP1, with a bunch of Silverlight 3 projects as part of the site. Nothing fancy at all. It uses Ninject for dependency injection (first version 2 beta, now the released version 2 with the MVC extensions). With the release of .NET 4.0, VS2010, MVC2 etc., we decided to move the application to the newest platform. The conversion wizard in VS2010 apparently took care of everything, with one exception: it didn't change the references from MVC 1 to MVC 2, so I had to do that manually. Of course, this makes me wonder what other MVC2 things could be missing from my app that would be there if I did File - New Project... but that is not the focus of this question.

    When I deploy this application to the IIS 7.5 server (running on Windows 2008 R2 x64), the application as such works. However, images, scripts and other static content don't seem to exist. They are there on disk on the server, but they don't show up in the client web browser. I am fairly new to IIS, so the only trick I knew was to open the web page in a browser on the server itself, as that could give me more information. And here, finally, we meet our enemy. If I go directly to the URL of one of the images (http://server/Content/someimage.jpg for instance), I get the following error in the browser:

    The IControllerFactory 'Ninject.Web.Mvc.NinjectControllerFactory' did not return a controller for a controller named 'Content'.

    Aha. The web server feeds this request to MVC, which with its default routing setup assumes Content to be a controller, and fails. How can I get it to treat Content/ and Scripts/ (among others) as non-controllers and just pass the static content through? This works with Cassini on my developer machine, but as soon as I deploy, the problem hits. I am using the latest version of Ninject MVC 2, where the IoC tool should pass missing controllers on to the base controller factory, but this has apparently not helped. I have also tried to add ignore routes for Content etc., but this apparently has no effect either. I am not even sure I am addressing the problem on the right level. Does anyone know where to look to get this app going? I have full control of the web server, so I can more or less do whatever I want to it, as long as it starts working. Thanks!
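
    For reference, a minimal sketch of the ignore-route setup being described, assuming a standard MVC2 Global.asax (the route names and defaults below are illustrative, not taken from the post). If routes like these really have no effect on IIS 7.5, the other usual suspect is the integrated pipeline pushing every request through managed routing (runAllManagedModulesForAllRequests="true" under system.webServer/modules in web.config), so static files never reach the StaticFile handler.

        // Requires System.Web.Mvc and System.Web.Routing (MVC2).
        public static void RegisterRoutes(RouteCollection routes)
        {
            // Let IIS serve these folders instead of handing them to MVC routing.
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("Content/{*pathInfo}");
            routes.IgnoreRoute("Scripts/{*pathInfo}");

            // false is the default, but being explicit documents the intent:
            // requests that map to real files on disk should not be routed.
            routes.RouteExistingFiles = false;

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

    Registering the ignore routes before MapRoute matters, since routing picks the first match.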

    Read the article

  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straightforward: the ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the object I want to delete, so I know what the user clicked. A user should only be able to delete their "own" objects, so I need to do some checks in the Command's CanExecute call to verify that the user has the right permissions.

    The problem is that the parameter passed to CanExecute is NULL the first time it's called, so I can't run the logic to enable/disable the command. However, if I make it always enabled and then click the button to execute the command, the CommandParameter is passed in correctly. So the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this:

        <ItemsControl x:Name="commentsList"
                      ItemsSource="{Binding Path=SharedDataItemPM.Comments}"
                      Width="Auto" Height="Auto">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <StackPanel Orientation="Horizontal">
                        <Button Content="Delete" FontSize="10"
                                Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}"
                                CommandParameter="{Binding}" />
                    </StackPanel>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    So as you can see, I have a list of Comment objects, and I want the CommandParameter of the DeleteCommentCommand to be bound to the Comment object. So I guess my question is: has anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time. Why is that?

    Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. It turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (as suggested), but it still doesn't work. Any tips on how to control this?

    Update 2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also, is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command object?

    Update 3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
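
    One hedged workaround sketch rather than a confirmed fix: once the items are in place, ask WPF to re-run CanExecute so the now-bound CommandParameter is seen. This only helps if the command's CanExecuteChanged is wired to CommandManager.RequerySuggested, which is an assumption here, and OnCommentsLoaded is a hypothetical hook in the view, not something from the post.

        // Requires System.Windows.Input and System.Windows.Threading;
        // assumed to live in the view's code-behind so Dispatcher is available.
        private void OnCommentsLoaded()
        {
            // Queue the requery at Loaded priority so it runs after data binding
            // has pushed the CommandParameter values into the buttons.
            Dispatcher.BeginInvoke(
                DispatcherPriority.Loaded,
                new Action(CommandManager.InvalidateRequerySuggested));
        }

    A simpler defensive option is to return true from CanExecute while the parameter is still null and do the real permission check once a non-null value arrives.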

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog, and there is a whole blog devoted to the idea of the "10x productive programmer".

    In the years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).

    Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article:

    The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.

    The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.

    The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, then even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup:

    "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better."

    This leads to two questions. I'm excluding self-employed programmers and contractors; if you disagree, that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top earners, but please provide an explanation/story/rationale along with any anecdotes.

    [EDIT] I thought up some other areas in which talent/ability affects pay: financial traders (commodities, stock, derivatives, etc.), designers (fashion, interior decorators, architects, etc.), professionals (doctors, lawyers, accountants, etc.), and sales.

    Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "smart and gets things done" programmers 6, 8, or 10 times what an intern makes?

    [Footnote: I posted this question after submitting it to the Stack Overflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post, 'Of Rockstars and Bricklayers'.]

    Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents; the competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first-basemen. Also, something Joel mentioned in the Stack Overflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay range between the highs and lows. One is the peer pressure within organizations to pay within a given range; another is the likelihood that the high performer will realize they are under-compensated and seek greener pastures.

    Read the article

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large SOAP responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression on IIS. Here's my dynamicTypes section from applicationHost.config:

        <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/xop+xml" enabled="true" />
            <add mimeType="application/soap+xml" enabled="true" />
            <add mimeType="*/*" enabled="false" />
        </dynamicTypes>

    And also:

        <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />

    though I'm not so clear on why that's needed. I threw some extra mime types in there just in case. I've implemented IClientMessageInspector to add Accept-Encoding: gzip, deflate to my client's HTTP requests. Here's an example of a request header taken from Fiddler:

        POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1
        Content-Type: application/soap+xml; charset=utf-8
        Accept-Encoding: gzip, deflate
        Host: [omitted]
        Content-Length: 542
        Expect: 100-continue

    Now, this doesn't work. There's simply no compression happening, no matter what the size of the message (tried up to 1.5 MB). I've looked at this post, but have not run into the exception he describes, so I haven't tried the CodeProject implementation he proposes. Also, I've seen a lot of other implementations that are supposed to get this to work, but I cannot make sense of them (e.g., MSDN's GZip encoder). Why would I need to implement the encoder, or the CodeProject solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni
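
    For completeness, a rough sketch of the kind of client-side inspector the post describes (the class name is illustrative); it only advertises Accept-Encoding, so the server configuration still has to do the actual compressing.

        // Requires System.ServiceModel, System.ServiceModel.Channels,
        // System.ServiceModel.Dispatcher and System.Net.
        public class GzipAcceptInspector : IClientMessageInspector
        {
            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                HttpRequestMessageProperty http;
                object existing;
                if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out existing))
                {
                    http = (HttpRequestMessageProperty)existing;
                }
                else
                {
                    http = new HttpRequestMessageProperty();
                    request.Properties.Add(HttpRequestMessageProperty.Name, http);
                }

                // Advertise that the client can handle compressed responses.
                http.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";
                return null;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            {
                // Nothing to do on the way back in this sketch.
            }
        }

    If the header verifiably arrives (as it does in the Fiddler trace above) and the response still carries no Content-Encoding: gzip header, it is worth confirming that the IIS Dynamic Content Compression feature is actually installed on the server, not merely switched on in applicationHost.config.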

    Read the article

  • Detect user logout / shutdown in Python / GTK under Linux - SIGTERM/HUP not received

    - by Ivo Wetzel
    OK, this is presumably a hard one. I've got a pyGTK application that has random crashes due to X Window errors that I can't catch/control, so I created a wrapper that restarts the app as soon as it detects a crash. Now comes the problem: when the user logs out or shuts down the system, the app exits with status 1, but on some X errors it does so too. So I tried literally everything to catch the shutdown/logout, with no success. Here's what I've tried:

        import pygtk
        import gtk
        import sys

        class Test(gtk.Window):
            def delete_event(self, widget, event, data=None):
                open("delete_event", "wb")

            def destroy_event(self, widget, data=None):
                open("destroy_event", "wb")

            def destroy_event2(self, widget, event, data=None):
                open("destroy_event2", "wb")

            def __init__(self):
                gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
                self.show()
                self.connect("delete_event", self.delete_event)
                self.connect("destroy", self.destroy_event)
                self.connect("destroy-event", self.destroy_event2)

        def foo():
            open("add_event", "wb")

        def ex():
            open("sys_event", "wb")

        from signal import *

        def clean(sig):
            f = open("sig_event", "wb")
            f.write(str(sig))
            f.close()
            exit(0)

        for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM):
            signal(sig, lambda *args: clean(sig))

        def at():
            open("at_event", "wb")

        import atexit
        atexit.register(at)

        f = Test()
        sys.exitfunc = ex
        gtk.quit_add(gtk.main_level(), foo)
        gtk.main()
        open("exit_event", "wb")

    Not one of these succeeds. Is there any low-level way to detect the system shutdown? Google didn't find anything related to it. I guess there must be a way, am I right? :/

    EDIT: OK, more stuff. I've created this shell script:

        #!/bin/bash
        trap test_term TERM
        trap test_hup HUP

        test_term(){
            echo "teeeeeeeeeerm" >~/Desktop/term.info
            exit 0
        }

        test_hup(){
            echo "huuuuuuuuuuup" >~/Desktop/hup.info
            exit 1
        }

        while [ true ]
        do
            echo "idle..."
            sleep 2
        done

    And also created a .desktop file to run it:

        [Desktop Entry]
        Name=Kittens
        GenericName=Kittens
        Comment=Kitten Script
        Exec=kittens
        StartupNotify=true
        Terminal=false
        Encoding=UTF-8
        Type=Application
        Categories=Network;GTK;
        Name[de_DE]=Kittens

    Normally this should create the term file on logout and the hup file when it has been started with &. But not on my system: GDM doesn't care about the script at all, and when I re-log, it's still running. I've also tried using shopt -s huponexit, with no success.

    EDIT 2: Here's some more information about the real code. The whole thing looks like this:

    Wrapper script that catches errors and restarts the program -> main program with the GTK main loop -> background updater thread

    The flow is: start the wrapper -> enter the restart loop (while restarts < max) -> start the program -> check the return code -> write the error to a file, or exit the wrapper on 0.

    Now, on shutdown, the program returns 1. That means either it got a hangup or the parent process terminated; the main problem is to figure out which of the two just happened. X errors result in a 1 too. Trapping in the shell script doesn't work. If you want to take a look at the actual code, check it out over at GitHub: http://github.com/BonsaiDen/Atarashii
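
    One hedged idea, a sketch rather than a tested fix: CPython only delivers signals to Python-level handlers while the interpreter is executing bytecode, and gtk.main() blocks inside C code, so SIGTERM/SIGHUP handlers can look as if they are simply ignored. Waking the main loop periodically gives the handlers a chance to run before the session manager escalates. The file names and the 500 ms interval below are illustrative.

        import signal
        import gobject
        import gtk

        def on_term(signum, frame):
            # Hypothetical cleanup hook: leave a marker file, then leave the loop.
            open("sig_%d_event" % signum, "wb").close()
            gtk.main_quit()

        for sig in (signal.SIGTERM, signal.SIGHUP, signal.SIGINT):
            signal.signal(sig, on_term)

        # Wake the main loop every 500 ms so pending Python signals get delivered
        # even while gtk.main() is otherwise idle inside C code.
        gobject.timeout_add(500, lambda: True)

        win = gtk.Window()
        win.connect("destroy", gtk.main_quit)
        win.show()
        gtk.main()

    Whether GDM actually sends SIGTERM/SIGHUP to the process at logout is a separate assumption worth verifying; if it only kills the X connection, the handlers above never get a chance to fire.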

    Read the article

  • need help with jquery selectors

    - by photographer
    I've got code like this:

        <ul class="gallery_demo_unstyled">
            <li class="active"><img src='001.jpg' /></li>
            <li><img src='002.jpg' /></li>
            <li><img src='003.jpg' /></li>
            <li><img src='004.jpg' /></li>
            <li><img src='005.jpg' /></li>
            <li><img src='006.jpg' /></li>
        </ul>

        <div class="Paginator">
            <a href="../2/" class="Prev">&lt;&lt;</a>
            <a href="../1/">1</a>
            <a href="../2/">2</a>
            <span class="this-page">3</span>
            <a href="../4/">4</a>
            <a href="../5/">5</a>
            <a href="../4/" class="Next">&gt;&gt;</a>
        </div>

        <div class="Albums"><div class="AlbumsMenu">
            <p><b>ALBUMS</b></p>
            <p><a href="../../blackandwhite/1/" >blackandwhite</a></p>
            <p><a href="../../color/1/" class='this-page'>>>color</a></p>
            <p><a href="../../film/1/" >film</a></p>
            <p><a href="../../digital/1/" >digital</a></p>
            <p><a href="../../portraits/1/" >portraits</a></p>
        </div></div>

    ...and some JavaScript/jQuery that cycles through the images (the top-level li elements), going back to the first image after the last one:

        $$.nextSelector = function(selector) {
            return $(selector).is(':last-child')
                ? $(selector).siblings(':first-child')
                : $(selector).next();
        };

    The current page is always marked with the 'this-page' class (a span or p in my case, but I could change that if necessary). The question: what should I change in my code so that, after the last image, it goes on to the next page instead of cycling through the same page over and over, and after the last page it goes on to the next album? And after the last image of the last page of the last album, either go back to the first image on the first page of the first album or just stop there (I don't really care which).
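
    A rough sketch against the markup above, untested and with the album fallback only hinted at: when the active li is the last one, follow the pager's Next link instead of wrapping around.

        $$.nextSelector = function (selector) {
            if ($(selector).is(':last-child')) {
                var next = $('.Paginator a.Next');   // the >> link, present only if there is a next page
                if (next.length) {
                    window.location.href = next.attr('href');
                    return $(selector);              // keep the current item while the new page loads
                }
                // Last page of the album: a fuller version would jump to the next album,
                // e.g. the AlbumsMenu link following the one marked 'this-page'.
                return $(selector);
            }
            return $(selector).next();
        };

    The same pattern (check for a "next" link in the relevant menu, navigate if it exists, fall back otherwise) extends naturally from pages to albums.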

    Read the article

  • Issue with dynamic array Queue data structure with void pointer

    - by Nazgulled
    Hi, maybe there's no way to solve this the way I'd like, but I don't know everything, so I'd better ask... I've implemented a simple Queue with a dynamic array so the user can initialize it with whatever number of items they want. I'm also trying to use a void pointer to allow any data type, but that's the problem. Here's my code:

        typedef void * QueueValue;

        typedef struct sQueueItem {
            QueueValue value;
        } QueueItem;

        typedef struct sQueue {
            QueueItem *items;
            int first;
            int last;
            int size;
            int count;
        } Queue;

        void queueInitialize(Queue **queue, size_t size) {
            *queue = xmalloc(sizeof(Queue));
            QueueItem *items = xmalloc(sizeof(QueueItem) * size);
            (*queue)->items = items;
            (*queue)->first = 0;
            (*queue)->last = 0;
            (*queue)->size = size;
            (*queue)->count = 0;
        }

        Bool queuePush(Queue * const queue, QueueValue value, size_t val_sz) {
            if(isNull(queue) || isFull(queue)) return FALSE;
            queue->items[queue->last].value = xmalloc(val_sz);
            memcpy(queue->items[queue->last].value, value, val_sz);
            queue->last = (queue->last+1) % queue->size;
            queue->count += 1;
            return TRUE;
        }

        Bool queuePop(Queue * const queue, QueueValue *value) {
            if(isEmpty(queue)) return FALSE;
            *value = queue->items[queue->first].value;
            free(queue->items[queue->first].value);
            queue->first = (queue->first+1) % queue->size;
            queue->count -= 1;
            return TRUE;
        }

    The problem lies in the queuePop function. When I call it, I lose the value because I free it right away. I can't seem to solve this dilemma. I want my library to be generic and modular: the user should not have to care about allocating and freeing memory, that's the library's job. How can the user still get the value from queuePop while the library handles all the memory allocations and frees?
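
    One hedged way out, sketched against the declarations above (so xmalloc, Bool, isEmpty and friends are assumed to exist, and <stdlib.h>/<string.h> are assumed to be included): record the pushed size in each item, have queuePop hand the caller a fresh copy that the caller owns and frees, and let the library free its own internal copy. queuePush would also need to store val_sz into the new size field when it allocates.

        /* Item now remembers how big its payload is. */
        typedef struct sQueueItem {
            void  *value;
            size_t size;
        } QueueItem;

        Bool queuePop(Queue *const queue, QueueValue *value, size_t *size)
        {
            if (isEmpty(queue))
                return FALSE;

            QueueItem *item = &queue->items[queue->first];

            /* Give the caller a copy that outlives the queue's own buffer. */
            *value = malloc(item->size);
            if (*value == NULL)
                return FALSE;
            memcpy(*value, item->value, item->size);
            if (size != NULL)
                *size = item->size;

            free(item->value);                       /* library frees its own allocation */
            queue->first = (queue->first + 1) % queue->size;
            queue->count -= 1;
            return TRUE;
        }

    The alternative is to transfer ownership instead of copying: drop the free() in queuePop and document that the caller must free the returned pointer. Either way, someone has to own the allocation after the pop; the copy just keeps that ownership rule symmetrical with queuePush.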

    Read the article

  • Properly clean up excel interop objects revisited: Wrapper objects

    - by chiccodoro
    Hi all,

    Excel 2007 Hangs When Closing via .NET
    How to properly clean up Excel interop objects in C#
    How to properly clean up interop objects in C#

    All of these struggle with the problem that C# does not release the Excel COM objects properly after using them. There are mainly two directions for working around this issue:

    1. Kill the Excel process when Excel is no longer used.
    2. Take care to assign each COM object used explicitly to a variable and to call Marshal.ReleaseComObject on all of these.

    Some have stated that 2 is too tedious and that there is always some uncertainty about whether you have forgotten to stick to this rule somewhere in the code. Still, 1 seems dirty and dangerous to me, and I could also imagine that killing processes is not allowed in an environment with restricted access.

    So I've been thinking about solving 2 by creating another proxy object model which mimics the Excel object model (for me, it would suffice to implement the objects I actually need). The principle would look as follows: each Excel interop class has a proxy which wraps an object of that class. The proxy releases the COM object in its destructor. The proxy mimics the interface of the interop class (maybe by inheriting it). Any method that usually returns another COM object returns a proxy instead; the other methods simply delegate their implementation to the inner COM object.

    This is a rough sketch of the code:

        public class Application : Microsoft.Office.Interop.Excel.Application
        {
            private Microsoft.Office.Interop.Excel.Application innerApplication
                = new Microsoft.Office.Interop.Excel.Application();

            ~Application()
            {
                Marshal.ReleaseComObject(innerApplication);
            }

            public Workbooks Workbooks
            {
                get { return new Workbooks(innerApplication.Workbooks); }
            }
        }

        public class Workbooks
        {
            private Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks;

            Workbooks(Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks)
            {
                this.innerWorkbooks = innerWorkbooks;
            }

            ~Workbooks()
            {
                Marshal.ReleaseComObject(innerWorkbooks);
            }
        }

    My questions to you are in particular: Who finds this a bad idea, and why? Who finds this a great idea? If so, why hasn't anybody implemented/published such a model yet? Just because of the effort, or am I missing a show-stopping problem with the idea? Is it impossible/bad/dangerous to call ReleaseComObject in the destructor? (I've only seen proposals to put it in a Dispose() rather than in a destructor - why?) If the approach makes sense, any suggestions to improve it?
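
    On the destructor question, a hedged alternative sketch (the class and member names below are illustrative, not a published pattern): release deterministically through IDisposable so the COM reference is dropped at a point the caller controls, rather than whenever the GC happens to run the finalizer.

        // Requires a reference to Microsoft.Office.Interop.Excel.
        public sealed class WorkbooksProxy : IDisposable
        {
            private Microsoft.Office.Interop.Excel.Workbooks inner;

            public WorkbooksProxy(Microsoft.Office.Interop.Excel.Workbooks inner)
            {
                this.inner = inner;
            }

            public int Count
            {
                get { return inner.Count; }   // plain members just delegate to the RCW
            }

            public void Dispose()
            {
                if (inner != null)
                {
                    System.Runtime.InteropServices.Marshal.ReleaseComObject(inner);
                    inner = null;
                }
            }
        }

    A using block then scopes the lifetime of the underlying RCW explicitly, which is roughly what the "put it in Dispose() rather than a destructor" advice is getting at: a finalizer runs on the GC's finalizer thread at an unpredictable time, possibly long after Excel was expected to quit, so it is at best a safety net rather than the primary release path.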

    Read the article

  • RuntimeBinderException with dynamic in C# 4.0

    - by Terence Lewis
    I have an abstract base class:

        public abstract class Authorizer<T> where T : RequiresAuthorization
        {
            public AuthorizationStatus Authorize(T record)
            {
                // Perform authorization-specific stuff
                // and then hand off to an abstract method to handle T-specific stuff
                // that should happen when authorization is successful
            }
        }

    Then I have a bunch of different classes which all implement RequiresAuthorization, and correspondingly an Authorizer<T> for each of them (each business object in my domain requires different logic to execute once the record has been authorized). I'm also using a UnityContainer, in which I register the various Authorizer<T>'s. I then have some code as follows to find the right record in the database and authorize it:

        void Authorize(RequiresAuthorization item)
        {
            var dbItem = ChildContainer.Resolve<IAuthorizationRepository>()
                                       .RetrieveRequiresAuthorizationById(item.Id);

            var authorizerType = Type.GetType(String.Format(
                "Foo.Authorizer`1[[{0}]], Foo", dbItem.GetType().AssemblyQualifiedName));

            dynamic authorizer = ChildContainer.Resolve(authorizerType);
            authorizer.Authorize(dbItem);
        }

    Basically, I'm using the Id on the object to retrieve it from the database; in the background NHibernate takes care of figuring out what type of RequiresAuthorization it is. I then want to find the right Authorizer for it (I don't know at compile time which implementation of Authorizer<T> I need, so there's a little bit of reflection to get the fully qualified type). To accomplish this, I use the non-generic overload of UnityContainer's Resolve method to look up the correct authorizer from configuration. Finally, I want to call Authorize on the authorizer, passing in the object I got back from NHibernate.

    Now, for the problem: in Beta 2 of VS2010 the above code works perfectly. On the RC and RTM, as soon as I make the Authorize() call, I get a RuntimeBinderException saying "The best overloaded method match for 'Foo.Authorizer<Bar>.Authorize(Bar)' has some invalid arguments". When I inspect the authorizer in the debugger, it's the correct type. When I call GetType().GetMethods() on it, I can see the Authorize method which takes a Bar. If I call GetType() on dbItem, it is a Bar. Because this worked in Beta 2 and not in the RC, I assumed it was a regression (it seems like it should work) and I delayed sorting it out until I'd had a chance to test it on the RTM version of C# 4.0. Now I've done that and the problem still persists. Does anybody have any suggestions to make this work? Thanks, Terence
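
    A hedged workaround sketch rather than an explanation of the binder change (ChildContainer, Authorizer<> and IAuthorizationRepository are taken from the post; everything else is illustrative): bypass dynamic entirely and invoke Authorize late-bound through reflection. One assumption worth checking along the way is whether dbItem.GetType() returns the real entity type or an NHibernate proxy subclass, since a proxy type would also explain a "best overloaded method match ... has some invalid arguments" failure under dynamic.

        // Requires System and System.Reflection.
        void AuthorizeViaReflection(RequiresAuthorization item)
        {
            var dbItem = ChildContainer.Resolve<IAuthorizationRepository>()
                                       .RetrieveRequiresAuthorizationById(item.Id);

            // Close Authorizer<> over the runtime type instead of building a type-name string.
            var authorizerType = typeof(Authorizer<>).MakeGenericType(dbItem.GetType());
            object authorizer = ChildContainer.Resolve(authorizerType);

            // Late-bound equivalent of authorizer.Authorize(dbItem).
            authorizerType.GetMethod("Authorize").Invoke(authorizer, new object[] { dbItem });
        }

    If the proxy hunch turns out to be right, resolving the authorizer for the entity's mapped (unproxied) type rather than GetType() would be the corresponding fix under dynamic as well.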

    Read the article

< Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >