Search Results

Search found 4938 results on 198 pages for 'unix timestamp'.

Page 185/198 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If while an application is running one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case but I assume this is true on Linux and other *nix as well) is smart enough to not delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes, it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones so that they're not overwritten or truncated. Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), but still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure which commands would need to be aliased. I imagine something like truss/strace, except before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far: Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option. Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode. Use "set -o noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.
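
    For reference, a minimal sketch of the move-aside approach described above (library names and paths are hypothetical, not from the question). Because rename(2) within a filesystem is atomic and never truncates the target, processes holding the old inode keep running:

        # a sketch, not the actual install script; paths are placeholders
        new=/opt/app/lib/.libfoo.so.incoming
        dst=/opt/app/lib/libfoo.so
        cp ./libfoo.so "$new" || exit 1           # stage on the same filesystem as $dst
        mv "$dst" /opt/app/deleted/libfoo.so.$$   # running apps keep the old inode open
        mv "$new" "$dst"                          # atomic rename, no truncation window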

    Read the article

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found feedback from a user in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected, lsusb reports Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply and lsusb -v -s002:003 confirms and expands: Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x051d American Power Conversion idProduct 0x0002 Uninterruptible Power Supply bcdDevice 0.90 iManufacturer 1 American Power Conversion iProduct 2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4 bNumConfigurations 1 Configuration Descriptor: [...] Interface Descriptor: [...] bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.00 bCountryCode 33 US bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 1134 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 100 Device Status: 0x0000 (Bus Powered) The kernel recognizes this and duly sets up crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0 As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM (just in case)) UPSCABLE usb UPSTYPE usb DEVICE (I have also tried commenting out DEVICE, and setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n). Yet, what apcaccess tells me is VERSION : 3.14.10 (13 September 2011) suse CABLE : USB Cable DRIVER : USB UPS Driver UPSMODE : Stand Alone STARTTIME: 2013-11-04 16:24:22 +0100 MODEL : STATUS : NOBATT LINEV : 000.0 Volts LOADPCT : 0.0 Percent Load Capacity BCHARGE : 000.0 Percent TIMELEFT : 0.0 Minutes MBATTCHG : 5 Percent MINTIMEL : 3 Minutes MAXTIME : 0 Seconds SENSE : Low LOTRANS : 000.0 Volts HITRANS : 000.0 Volts It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost; and I get the "communication restored" broadcast too, if I reconnect the cable. apcupsd is monitoring. So everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?
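
    One way to narrow this down, assuming the stock apcupsd tooling is installed: stop the daemon and talk to the UPS directly with apctest, which ships with apcupsd and drives the same USB code path via an interactive menu:

        /etc/init.d/apcupsd stop
        apctest    # pick the USB driver test; if apctest also reads no battery
                   # data, the fault is below apcupsd (firmware/HID report level)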

    Read the article

  • Error processing Spree sample images - file not recognized by identify command in paperclip geometry.rb:29

    - by purpletonic
    I'm getting an error when I run the Spree sample data. It occurs when Spree tries to load in the product data, specifically the product images. Here's the error I'm getting: * Execute db:load_file loading ruby <GEM DIR>/sample/lib/tasks/../../db/sample/spree/products.rb -- Processing image: ror_tote.jpeg rake aborted! /var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-21549-2rktq1 is not recognized by the 'identify' command. <GEM DIR>/paperclip-2.7.1/lib/paperclip/geometry.rb:31:in `from_file' <GEM DIR>/spree/core/app/models/spree/image.rb:35:in `find_dimensions' I've made sure ImageMagick is installed correctly, as previously I was having problems with it. Here's the output I'm getting when running the identify command directly. $ identify Version: ImageMagick 6.7.7-6 2012-10-06 Q16 http://www.imagemagick.org Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC Features: OpenCL ... other usage info omitted ... I also used pry with the pry-debugger and put a breakpoint in geometry.rb inside of Paperclip. Here's what that section of geometry.rb looks like: # Uses ImageMagick to determine the dimensions of a file, passed in as either a # File or path. # NOTE: (race cond) Do not reassign the 'file' variable inside this method as it is likely to be # a Tempfile object, which would be eligible for file deletion when no longer referenced. def self.from_file file file_path = file.respond_to?(:path) ? file.path : file raise(Errors::NotIdentifiedByImageMagickError.new("Cannot find the geometry of a file with a blank name")) if file_path.blank? geometry = begin silence_stream(STDERR) do binding.pry Paperclip.run("identify", "-format %wx%h :file", :file => "#{file_path}[0]") end rescue Cocaine::ExitStatusError "" rescue Cocaine::CommandNotFoundError => e raise Errors::CommandNotFoundError.new("Could not run the `identify` command. Please install ImageMagick.") end parse(geometry) || raise(Errors::NotIdentifiedByImageMagickError.new("#{file_path} is not recognized by the 'identify' command.")) end At the point of my binding.pry statement, the file_path variable is set to the following: file_path => "/var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-22732-1ctl1g1" I've also double-checked that this file exists by opening Finder in that directory and viewing it with the Preview app, and I've confirmed the program can run identify by executing %x{identify} in pry, which returns the same version (ImageMagick 6.7.7-6 2012-10-06 Q16) as before. Removing the additional digits (is this a timestamp?) after the file extension and running the Paperclip.run command manually in Pry gives me a different error: Cocaine::ExitStatusError: Command 'identify -format %wx%h :file' returned 1. Expected 0 I've also tried manually updating the Paperclip gem in Spree to 3.0.2 and still get the same error. So, I'm not really sure what else to try. Is there still something incorrect with my ImageMagick setup?
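
    It may help to reproduce Paperclip's exact invocation by hand, including the [0] page suffix it appends; the temp path below is the one from the error output, and identify's stderr (which silence_stream hides inside Paperclip) will name the real cause:

        identify -format %wx%h \
          "/var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-21549-2rktq1[0]"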

    Read the article

  • Grails validateable does not work for a non-persistent domain class

    - by Hoàng Long
    I followed the instructions here: http://www.grails.org/doc/latest/guide/7.%20Validation.html and added into config.groovy: grails.validateable.classes = ['liningtest.Warm'] Then added in src/groovy/Warm.groovy (it's a non-persistent domain class): package liningtest import org.codehaus.groovy.grails.validation.Validateable class Warm { String name; int happyCite; Warm(String n, int h) { this.name = n; this.happyCite = h; } static constraints = { name(size: 1..50) happyCite(min: 100) } } But it just doesn't work (with both "blank: false" & "size: 0..25") for the "hasErrors" function. It always returns false, even when the name is 25 characters. Is this a Grails bug, and if so, is there any work-around? I'm using Grails 1.3.3 UPDATE: I have updated the simplified code. And now I know that constraint "size" can't be used with "blank", but it still does not work. My test class in test/unit/liningtest/WarmTests.groovy package liningtest import grails.test.* class WarmTests extends GrailsUnitTestCase { protected void setUp() { super.setUp() } protected void tearDown() { super.tearDown() } void testSomething() { def w = new Warm('Hihi', 3) assert (w.happyCite == 3) assert (w.hasErrors() == true) } } And the error I got: <?xml version="1.0" encoding="UTF-8" ?> <testsuite errors="1" failures="0" hostname="evolus-50b0002c" name="liningtest.WarmTests" tests="1" time="0.062" timestamp="2010-12-16T04:07:47"> <properties /> <testcase classname="liningtest.WarmTests" name="testSomething" time="0.062"> <error message="No signature of method: liningtest.Warm.hasErrors() is applicable for argument types: () values: [] Possible solutions: hashCode()" type="groovy.lang.MissingMethodException">groovy.lang.MissingMethodException: No signature of method: liningtest.Warm.hasErrors() is applicable for argument types: () values: [] Possible solutions: hashCode() at liningtest.WarmTests.testSomething(WarmTests.groovy:18) </error> </testcase> <system-out><![CDATA[--Output from testSomething-- ]]></system-out> <system-err><![CDATA[--Output from testSomething-- ]]></system-err> </testsuite> UPDATE 2: When I don't use a unit test but instead call hasErrors in the controller, it runs but returns false. (hasErrors returns false with Warm('Hihi', 3).) Does anyone have a clue?
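
    For what it's worth: in a Grails 1.3 unit test the validation machinery is never wired onto validateable classes automatically, which is consistent with the MissingMethodException above, and hasErrors() only reports errors after validate() has been called, which may also explain the false result in the controller. A sketch of the test using GrailsUnitTestCase's mockForConstraintsTests:

        void testSomething() {
            mockForConstraintsTests(Warm)   // wires validate()/hasErrors() onto Warm in tests
            def w = new Warm('Hihi', 3)
            assert !w.validate()            // happyCite 3 violates min: 100
            assert w.hasErrors()            // populated only after validate()
        }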

    Read the article

  • problem with custom NSURLProtocol and caching on iPhone

    - by TomSwift
    My iPhone app embeds a UIWebView which loads html via a custom NSURLProtocol handler I have registered. My problem is that resources referenced in the returned html, which are also loaded via my custom protocol handler, are cached and never reloaded. In particular, my stylesheet is cached: <link rel="stylesheet" type="text/css" href="./styles.css" /> The initial request to load the html in the UIWebView looks like this: NSString* strUrl = [NSMutableString stringWithFormat: @"myprotocol:///entry?id=%d", entryID ]; NSURL* url = [NSURL URLWithString: strUrl]; [_pCurrentWebView loadRequest: [NSURLRequest requestWithURL: url cachePolicy: NSURLRequestReloadIgnoringLocalCacheData timeoutInterval: 60 ]]; (note the cache policy is set to ignore, and I've verified this cache policy carries through to subsequent requests for page resources on the initial load) The protocol handler loads the html from a database and returns it to the client using code like this: // create the response record NSURLResponse *response = [[NSURLResponse alloc] initWithURL: [request URL] MIMEType: mimeType expectedContentLength: -1 textEncodingName: textEncodingName]; // get a reference to the client so we can hand off the data id client = [self client]; // turn off caching for this response data [client URLProtocol: self didReceiveResponse:response cacheStoragePolicy: NSURLCacheStorageNotAllowed]; // set the data in the response to our jfif data [client URLProtocol: self didLoadData:data]; [data release]; (Note the response cache policy is "not allowed"). Any ideas how I can make it NOT cache my styles.css resource? I need to be able to dynamically alter the content of this resource on subsequent loads of html that references this file. I thought clearing the shared url cache would work, but it doesn't: [[NSURLCache sharedURLCache] removeAllCachedResponses]; One thing that does work, but it's terribly inefficient, is to dynamically cache-bust the url for the stylesheet by adding a timestamp parameter: <link rel="stylesheet" type="text/css" href="./styles.css?ts=1234567890" /> To make this work I have to load my html from the db, search and replace the url for the stylesheet with a cache-busting parameter that changes on each request. I'd rather not do this. My presumption is that there is no problem if I were to load my content via the built-in HTTP protocol. In that case, I'm guessing that the UIWebView looks at any Cache-Control flags in the NSHTTPURLResponse object's http headers and abides by them. Since my NSURLResponse object has no http headers (it's not http...) then perhaps UIWebView just decides to cache the resource (ignoring the NSURLRequest caching directive?). Ideas???
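
    One hedged thing to try, following the presumption at the end: hand the client an NSHTTPURLResponse instead of a plain NSURLResponse, carrying an explicit Cache-Control header for the web view to abide by. Note initWithURL:statusCode:HTTPVersion:headerFields: is only public on newer SDKs (iOS 5+), so this is a sketch, not a guaranteed fit for older targets:

        NSDictionary *headers = [NSDictionary dictionaryWithObject:@"no-cache, no-store"
                                                            forKey:@"Cache-Control"];
        NSHTTPURLResponse *response = [[NSHTTPURLResponse alloc] initWithURL:[request URL]
                                                                  statusCode:200
                                                                 HTTPVersion:@"HTTP/1.1"
                                                                headerFields:headers];
        [[self client] URLProtocol:self didReceiveResponse:response
                cacheStoragePolicy:NSURLCacheStorageNotAllowed];
        [response release];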

    Read the article

  • WordPress: Display Online Users' Avatars

    - by Wade D Ouellet
    Hi, I'm using version 2.7.0 of this WordPress plugin to display which users are currently online (the latest version doesn't work): http://wordpress.org/extend/plugins/wp-useronline/ It's working great but I would love to be able to alter it quickly to display the users' avatars instead of their names. Hoping someone with pretty good knowledge of WordPress queries and functions can help. The part below seems to be the part that handles all this. If this isn't enough, here is the link to download the version I am using with the full php files: http://downloads.wordpress.org/plugin/wp-useronline.2.70.zip // If No Bot Is Found, Then We Check Members And Guests if ( !$bot_found ) { if ( $current_user->ID ) { // Check For Member $user_id = $current_user->ID; $user_name = $current_user->display_name; $user_type = 'member'; $where = $wpdb->prepare("WHERE user_id = %d", $user_id); } elseif ( !empty($_COOKIE['comment_author_'.COOKIEHASH]) ) { // Check For Comment Author (Guest) $user_id = 0; $user_name = trim(strip_tags($_COOKIE['comment_author_'.COOKIEHASH])); $user_type = 'guest'; } else { // Check For Guest $user_id = 0; $user_name = __('Guest', 'wp-useronline'); $user_type = 'guest'; } } // Check For Page Title if ( is_admin() && function_exists('get_admin_page_title') ) { $page_title = ' &raquo; ' . __('Admin', 'wp-useronline') . ' &raquo; ' . get_admin_page_title(); } else { $page_title = wp_title('&raquo;', false); if ( empty($page_title) ) $page_title = ' &raquo; ' . strip_tags($_SERVER['REQUEST_URI']); elseif ( is_singular() ) $page_title = ' &raquo; ' . __('Archive', 'wp-useronline') . ' ' . $page_title; } $page_title = get_bloginfo('name') . $page_title; // Delete Users $delete_users = $wpdb->query($wpdb->prepare(" DELETE FROM $wpdb->useronline $where OR timestamp < CURRENT_TIMESTAMP - %d ", self::$options->timeout)); // Insert Users $data = compact('user_type', 'user_id', 'user_name', 'user_ip', 'user_agent', 'page_title', 'page_url', 'referral'); $data = stripslashes_deep($data); $insert_user = $wpdb->insert($wpdb->useronline, $data); // Count Users Online self::$useronline = intval($wpdb->get_var("SELECT COUNT(*) FROM $wpdb->useronline"));
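
    A sketch of one way this could be rendered, using WordPress's core get_avatar() (which takes a user ID or email and falls back to the default image). The variable names come from the plugin block above; where exactly the plugin emits each user's name in its output template is not shown here, so treat this as an illustration rather than a drop-in patch:

        <?php
        // inside the loop that prints each online user
        if ( 'member' == $user_type ) {
            echo get_avatar( $user_id, 32 );                      // 32px avatar for members
        } else {
            echo get_avatar( 'guest@example.invalid', 32 );       // default image for guests
        }
        ?>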

    Read the article

  • How to split XML into header and items using Smooks?

    - by palto
    I have an XML file roughly like this: <batch> <header> <headerStuff /> </header> <contents> <timestamp /> <invoices> <invoice> <invoiceStuff /> </invoice> <!-- Insert 1000 invoice elements here --> </invoices> </contents> </batch> I would like to split that file into 1000 files with the same headerStuff and only one invoice. Smooks documentation is very proud of the possibilities of transformations, but unfortunately I don't want to do those. The only way I've figured out how to do this is to repeat the whole structure in FreeMarker. But that feels like repeating the structure unnecessarily. The header has like 30 different tags so there would be lots of work involved also. What I currently have is this: <?xml version="1.0" encoding="UTF-8"?> <smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd" xmlns:calc="http://www.milyn.org/xsd/smooks/calc-1.1.xsd" xmlns:frag="http://www.milyn.org/xsd/smooks/fragment-routing-1.2.xsd" xmlns:file="http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd"> <params> <param name="stream.filter.type">SAX</param> </params> <frag:serialize fragment="INVOICE" bindTo="invoiceBean" /> <calc:counter countOnElement="INVOICE" beanId="split_calc" start="1" /> <file:outputStream openOnElement="INVOICE" resourceName="invoiceSplitStream"> <file:fileNamePattern>invoice-${split_calc}.xml</file:fileNamePattern> <file:destinationDirectoryPattern>target/invoices</file:destinationDirectoryPattern> <file:highWaterMark mark="10"/> </file:outputStream> <resource-config selector="INVOICE"> <resource>org.milyn.routing.io.OutputStreamRouter</resource> <param name="beanId">invoiceBean</param> <param name="resourceName">invoiceSplitStream</param> <param name="visitAfter">true</param> </resource-config> </smooks-resource-list> That creates files for each invoice tag, but I don't know how to continue from there to get the header into each file as well. EDIT: The solution has to use Smooks. We use it in an application as a generic splitter and just create different smooks configuration files for different types of input files.
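
    One pattern from the Smooks splitting examples may apply here, untested against this exact config, so element casing, namespace versions, and the wrapper markup below are assumptions: serialize the header fragment into a bean exactly as is already done for the invoice, then route a small inline FreeMarker template that stitches the two serialized fragments together to the existing output stream, instead of the OutputStreamRouter:

        <frag:serialize fragment="header" bindTo="headerXml" />

        <ftl:freemarker applyOnElement="invoice"
                        xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">
            <ftl:template><!--<batch>${headerXml}<contents><invoices>${invoiceBean}</invoices></contents></batch>--></ftl:template>
            <ftl:use>
                <ftl:outputTo outputStreamResource="invoiceSplitStream" />
            </ftl:use>
        </ftl:freemarker>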

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that). The "musts" we've come up with for any replacement system are: Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point: Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point) Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible) Must be DFSG-free Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools are our SOP) Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards) Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language" Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves": Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically) Should support raw tables Should allow you to specify particular ICMP in both incoming packets and REJECT rules Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt) The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
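
    For reference, the load pattern the "musts" describe is exactly what iptables-restore provides natively: generate the complete ruleset as text, then commit it with one atomic replace per table rather than thousands of individual iptables calls:

        # generate ruleset.v4 / ruleset.v6 with your tool of choice, then:
        iptables-restore  < ruleset.v4    # single atomic swap, no half-loaded firewall
        ip6tables-restore < ruleset.v6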

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development. On my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and can not be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!
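
    One sanity check worth running from the development machine itself (server and share names below are placeholders): validate the LIVE credentials against the share outside of IIS entirely.

        net use \\storage\media /user:LIVE\MediaUser *
        rem If this prompts for the password and maps cleanly, the credentials are
        rem fine and the fault is in how IIS stores or passes them for the vdir.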

    Read the article

  • Help debugging Sendmail/Mailman configuration issue

    - by inxilpro
    Hi folks, I'm trying to configure a server with Sendmail and Mailman. I've been getting "Broken pipe" errors for a while, and have slowly been debugging. I fixed some permission issues, and changed the user that Mailman expects to be called from, among other things. Finally, I'd gone through everything I could think of, so I added a new test to see if it's the Mailman script or Sendmail that's causing the problem. Here's the error I'm getting now (stripped of timestamps and identifying information): <-- MAIL FROM:[email protected] Authentication-Warning: xxxxx.org: xxxxxxxxxxxxxx.net [xx.xx.xxx.xxx] didn't use HELO protocol --- 250 2.1.0 [email protected]... Sender ok <-- RCPT TO: [email protected] --- 250 2.1.5 [email protected]... Recipient ok <-- DATA --- 354 Enter mail, end with "." on a line by itself [email protected], size=20, class=0, nrcpts=1, msgid=<[email protected]>, proto=SMTP, relay=xxxxxxxxxxxxxx.net [xx.xx.xxx.xxx] --- 250 2.0.0 o6KMg2xZ025804 Message accepted for delivery alias [email protected] => "|/bin/echo foo" SYSERR(root): putbody: write error: Broken pipe 0: fl=0x0, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0 1: fl=0x1, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0 2: fl=0x1, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0 3: fl=0x2, mode=140777: SOCK localhost->[[UNIX: /dev/log]] 5: fl=0x0, mode=100600: dev=8/3, ino=486765, nlink=1, u/gid=0/51, size=5 6: fl=0x8000, mode=100640: dev=8/3, ino=65501, nlink=1, u/gid=0/0, size=12288 7: fl=0x8000, mode=100640: dev=8/3, ino=65501, nlink=1, u/gid=0/0, size=12288 8: fl=0x8000, mode=100640: dev=8/3, ino=65510, nlink=1, u/gid=0/0, size=12288 9: fl=0x8000, mode=100640: dev=8/3, ino=65510, nlink=1, u/gid=0/0, size=12288 10: fl=0x8000, mode=100640: dev=8/3, ino=64814, nlink=1, u/gid=0/51, size=12288 11: fl=0x8000, mode=100640: dev=8/3, ino=64814, nlink=1, u/gid=0/51, size=12288 12: fl=0x1, mode=100600: dev=8/3, ino=486767, nlink=1, u/gid=0/51, size=754 13: fl=0x1, mode=10600: FIFO: dev=0/5, ino=7649785, nlink=1, u/gid=0/51, size=0 14: fl=0x0, mode=10600: FIFO: dev=0/5, ino=7649786, nlink=1, u/gid=0/51, size=0 MCI@0x0: NULL MCI@0x0: NULL to="|/bin/echo foo", [email protected] (8/0), delay=00:00:08, xdelay=00:00:00, mailer=prog, pri=30476, dsn=5.0.0, stat=Service unavailable o6KMsnxX025948: DSN: Service unavailable done; delay=00:00:08, ntries=1 The alias in /etc/aliases is: cmtest: "|/bin/echo foo" As you can see, even when trying to pipe to /bin/echo I still get the same error. But I can't for the life of me figure out what else to check. Normal aliases work fine. Any ideas? Thanks!
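
    One hedged lead, since the question doesn't confirm which features this sendmail build uses: stat=Service unavailable on the prog mailer is the symptom sendmail's restricted shell (smrsh) produces when the aliased program isn't whitelisted. If this build has FEATURE(`smrsh'), the usual fix is to link the target programs into smrsh's directory:

        # /etc/smrsh is the usual directory on Linux builds; some use /usr/adm/sm.bin
        ln -s /bin/echo /etc/smrsh/echo
        ln -s /usr/lib/mailman/mail/mailman /etc/smrsh/mailman   # Mailman path is a guess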

    Read the article

  • Recover harddrive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting) and the hard drive would make weird cyclical clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged it in there. If I run "fdisk" I get: Disk /dev/sdb: 20.0 GB, 20003880960 bytes 64 heads, 32 sectors/track, 19077 cylinders Units = cylinders of 2048 * 512 = 1048576 bytes Disk identifier: 0x64651a0a Disk /dev/sdb doesn't contain a valid partition table Fine, the partition table is messed up. However, if I run "testdisk" in an attempt to fix the table, it freezes at this point, making the same cyclical clicking noises: Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32 Analyse cylinder 158/19077: 00% I don't really care about the hard drive working again, just the data, so I ran "gpart" to figure out where the partitions used to be. I got this: dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb) * Warning: strange partition table magic 0x2A55. Primary partition(1) type: 222(0xDE)(UNKNOWN) size: 15mb #s(31429) s(63-31491) chs: (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r hex: 00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00 Primary partition(2) type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT) size: 19021mb #s(38956987) s(31492-38988478) chs: (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r hex: 80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02 So I tried to mount just the old NTFS partition, but got an error: sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb NTFS signature is missing. Ugh. Okay. But then I tried to get a raw data dump by running dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987 But the file only got up to 59885568 bytes, and the drive made the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there... if I view that 57MB file in textpad... I can see raw data from files. How can I get my data back? Thanks for any suggestions. Solution: I was able to recover about 90% of my data: (1) froze the hard drive in the freezer; (2) used ddrescue to make a copy of the drive; (3) since ddrescue wasn't able to get enough of the drive for testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files.
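
    For future visitors, the ddrescue step from the solution above looks roughly like this (destination paths are placeholders; the mapfile lets interrupted runs resume and confines retries to the bad areas):

        ddrescue -n  /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.map   # fast pass, skip bad areas
        ddrescue -r3 /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.map   # then retry bad sectors 3 times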

    Read the article

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow, so I had to resort to using MacVim. There is nothing wrong with MacVim, but my workflow would be much smoother if I used only Terminal/iTerm2. When it's slow: Loading files, in particular Rails files, takes about 1 - 1.5s. Removing rails.vim decreases this time to 0.5 - 1s. In MacVim this is instantaneous. Scrolling through the rows and columns via h, j, k, l progressively gets slower the longer I hold down the keys. Eventually, it starts jumping rows. I have my Key Repeat set to Fast and Delay Until Repeat set to Short. After 10 - 15 minutes of usage, using plugins such as ctrlp or Command-T gets very laggy. I'd type a letter, wait 2 - 3s, then type the next. My setup: 11" MacBook Air running Mac OS X Version 10.7.3 (1.6 Ghz Intel Core 2 Duo, 4 GB DDR3) My dotfiles. > vim --version VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23) MacOS X (unix) version Included patches: 1-333 Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl +persistent_undo +postscript +printer +profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim" Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1 Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv -framework Cocoa -framework Python -lruby I've tried running without any plugins or syntax highlighting. It opens files a lot faster but still not as fast as MacVim. But the other two problems still exist. Why is my vim configuration slow? How can I improve the speed of my vim configuration within Terminal or iTerm2?
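
    Since this build has +startuptime compiled in (see the feature list above), the file-load slowness can be measured directly; the log attributes time to each sourced script, which separates plugin cost from terminal rendering cost:

        vim --startuptime /tmp/vim.log app/models/user.rb
        # inspect /tmp/vim.log afterwards: each line names the sourced script and
        # its timing, so the slow plugin or syntax file is identified directly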

    Read the article

  • How to know when a user has really released a key in Java?

    - by Luis Soeiro
    (Edited for clarity) I want to detect when a user presses and releases a key in Java Swing, ignoring the keyboard auto repeat feature. I also would like a pure Java approach that works on Linux, Mac OS and Windows. Requirements: When the user presses some key, I want to know which key it is; When the user releases some key, I want to know which key it is; I want to ignore the system auto repeat options: I want to receive just one keypress event for each key press and just one key release event for each key release; If possible, I would use items 1 to 3 to know if the user is holding more than one key at a time (i.e., she hits 'a' and, without releasing it, hits "Enter"). The problem I'm facing in Java is that under Linux, when the user holds some key, there are many keyPress and keyRelease events being fired (because of the keyboard repeat feature). I've tried some approaches with no success: Get the last time a key event occurred - in Linux, the intervals seem to be zero for key repeats; in Mac OS, however, they are not; Consider an event only if the current keyCode is different from the last one - this way the user can't hit the same key twice in a row; Here is the basic (non-working) part of code: import java.awt.event.*; public class Example implements KeyListener { public void keyTyped(KeyEvent e) { } public void keyPressed(KeyEvent e) { System.out.println("KeyPressed: "+e.getKeyCode()+", ts="+e.getWhen()); } public void keyReleased(KeyEvent e) { System.out.println("KeyReleased: "+e.getKeyCode()+", ts="+e.getWhen()); } } When a user holds a key (i.e., 'p') the system shows: KeyPressed: 80, ts=1253637271673 KeyReleased: 80, ts=1253637271923 KeyPressed: 80, ts=1253637271923 KeyReleased: 80, ts=1253637271956 KeyPressed: 80, ts=1253637271956 KeyReleased: 80, ts=1253637271990 KeyPressed: 80, ts=1253637271990 KeyReleased: 80, ts=1253637272023 KeyPressed: 80, ts=1253637272023 ... At least under Linux, the JVM keeps resending all the key events while a key is being held. To make things more difficult, on my system (Kubuntu 9.04 Core 2 Duo) the timestamps keep changing. The JVM sends a new key release and a new key press with the same timestamp. This makes it hard to know when a key is really released. Any ideas? Thanks
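
    A common workaround exploits exactly the observation above: an auto-repeat release is followed essentially instantly by a press of the same key, so a release is treated as real only if no press arrives within a few milliseconds. A sketch, where handleRealPress/handleRealRelease are hypothetical callbacks; javax.swing.Timer fires on the event dispatch thread, the same thread that delivers key events, so no extra synchronization is needed:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.awt.event.KeyEvent;
        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.Timer;

        private final Map<Integer, Timer> pending = new HashMap<Integer, Timer>();

        public void keyReleased(final KeyEvent e) {
            Timer t = new Timer(5, new ActionListener() {   // 5 ms grace period
                public void actionPerformed(ActionEvent ev) {
                    pending.remove(e.getKeyCode());
                    handleRealRelease(e);                   // no press followed: real release
                }
            });
            t.setRepeats(false);
            pending.put(e.getKeyCode(), t);
            t.start();
        }

        public void keyPressed(KeyEvent e) {
            Timer t = pending.remove(e.getKeyCode());
            if (t != null) t.stop();                        // auto-repeat pair: swallow both
            else handleRealPress(e);                        // first press of a held key
        }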

    Read the article

  • Validate an XML file against a DTD through a proxy (C# 2.0)

    - by Chris Dunaway
    I have looked at many examples for validating an XML file against a DTD, but have not found one that allows me to use a proxy. I have a cXml file as follows (abbreviated for display) which I wish to validate: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE cXML SYSTEM "http://xml.cxml.org/schemas/cXML/1.2.018/InvoiceDetail.dtd"> <cXML payloadID="123456" timestamp="2009-12-10T10:05:30-06:00"> <!-- content snipped --> </cXML> I am trying to create a simple C# program to validate the xml against the DTD. I have tried code such as the following but cannot figure out how to get it to use a proxy: private static bool isValid = false; static void Main(string[] args) { try { XmlTextReader r = new XmlTextReader(args[0]); XmlReaderSettings settings = new XmlReaderSettings(); XmlDocument doc = new XmlDocument(); settings.ProhibitDtd = false; settings.ValidationType = ValidationType.DTD; settings.ValidationEventHandler += new ValidationEventHandler(v_ValidationEventHandler); XmlReader validator = XmlReader.Create(r, settings); while (validator.Read()) ; validator.Close(); // Check whether the document is valid or invalid. if (isValid) Console.WriteLine("Document is valid"); else Console.WriteLine("Document is invalid"); } catch (Exception ex) { Console.WriteLine(ex.ToString()); } } static void v_ValidationEventHandler(object sender, ValidationEventArgs e) { isValid = false; Console.WriteLine("Validation event\n" + e.Message); } The exception I receive is System.Net.WebException: The remote server returned an error: (407) Proxy Authentication Required. which occurs on the line while (validator.Read()) ; I know I can validate against a DTD locally, but I don't want to change the xml DOCTYPE since that is what the final form needs to be (this app is solely for diagnostic purposes). For more information about the cXML spec, you can go to cxml.org. I appreciate any assistance. Thanks
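
    A sketch of one route around the 407, with placeholder proxy address and credentials: the DTD fetch goes through WebRequest under the hood, which honors the process-wide default proxy, and the reader can also be given a resolver carrying credentials (System.Net and System.Xml namespaces assumed):

        // before creating the validating reader:
        WebProxy proxy = new WebProxy("http://proxy.example.com:8080");
        proxy.Credentials = new NetworkCredential("user", "password", "DOMAIN");
        WebRequest.DefaultWebProxy = proxy;

        // and/or hand the reader a resolver with explicit credentials:
        XmlUrlResolver resolver = new XmlUrlResolver();
        resolver.Credentials = new NetworkCredential("user", "password", "DOMAIN");
        settings.XmlResolver = resolver;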

    Read the article

  • High load on X3220 Quad Core Linux Apache server

    - by John Templar
    I'm seriously in need of help. My sites are now nearly impossible to use because of massive loads on my server. I'm already a month late on my mortgage and this really isn't helping my situation. I've been working on fixing this intermittent load problem for months (never this bad). I'm suspecting some kind of attack since I'm under DDOS attack a lot! I've been trying to figure out what is causing the load but I'm afraid I just don't have the experience or knowledge to understand all the data I've been looking at. I don't even know where to begin or how to test for the large array of attacks out there. Here's some data you might find useful... Server: Xeon X3220 Quad Core 2.4 GHz - Linux, FreeBSD 500 GB HD and 8 Gig of Ram. Runs Centos release 5.7 Server Version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_qos/9.74 Warning: All sites are softcore adult sites - mostly fantasy art like elves and amazons. 1) Sites may run fine for weeks or just days at less than 10 load then start jumping to 40-80 load - no idea why. Same sites, same mods, same amount of traffic - just WHAM! 2) I get an email almost every day that says: "Large Number of Failed Login Attempts from IP (different each time)". My webhost (who almost never helps me) told me it was a udp flood or something. 3) I've changed the port for MySQL from the default. If I ever put it back to the default - I get Loads of over 100 from what must be a constant mysql port flood. 4) I've reconfigured MYSQL. Link: http://www.deadlyamazons.com/logs/mycnf.txt 5) I have 3 Joomla Jomsocial networks. I've spent a couple weeks turning all the mods/plugins off, waiting a day and then turning them back on the next day or later if there isn't any change (there hasn't been). For example, on Thursday I'll turn off videos, on Friday I'll turn off chat.. etc and nothing changes the load appreciably. 6) Joomla info: All SEF turned off - sh404sef completely disabled and removed. Components: Joomla 1.5.22, Jomsocial 2.0.5, Kunena 1/31/2011, HWDMediashare 11/22/2010 and JBolo Chat 2.7.3, Comet Chat or Envolve Chat. Page Compression is on, Cache is on 15 mins. Please click on this forum to see links to all my reports: http://forum.joomla.org/viewtopic.php?f=433&t=706035&p=2777500#p2777500 Any help would be highly appreciated.
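
    Since an attack is suspected, one quick test during a load spike is whether a handful of IPs dominate the connection table; this uses only standard tools and needs nothing site-specific:

        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
        # a few addresses with hundreds of connections each points at a flood;
        # an even spread points back at the application stack instead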

    Read the article

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on both nodes: Prevent VRRP Master from becoming Master once it has failed Prevent master to fall back to master after failure How about UCARP? Name : ucarp Arch : x86_64 Version : 1.5.2 Release : 1.el5.rf Size : 81 k Repo : installed Summary : Common Address Redundancy Protocol (CARP) for Unix URL : http://www.ucarp.org/ License : BSD Description: UCARP allows a couple of hosts to share common virtual IP addresses in order : to provide automatic failover. It is a portable userland implementation of the : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's : alternative to the patents-bloated VRRP). : Strong points of the CARP protocol are: very low overhead, cryptographically : signed messages, interoperability between different operating systems and no : need for any dedicated extra network link between redundant hosts. If I don't use the --preempt option and set the --advskew to the same value, both nodes become master. /etc/sysconfig/carp/vip-010.conf # Virtual IP configuration file for UCARP # The number (from 001 to 255) in the name of the file is the identifier # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $ # Set the same password on all machines sharing the same virtual IP PASSWORD="pa$$w0rd" # You are required to have an IPADDR= line in the configuration file for # this interface (so no DHCP allowed) BIND_INTERFACE="eth0" # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file # already configured and with ONBOOT=no VIP_INTERFACE="eth0:0" # If you have extra options to add, see "ucarp --help" output # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP) OPTIONS="-z -k 255" /etc/sysconfig/network-scripts/ifcfg-eth0:0 DEVICE=eth0:0 ONBOOT=no BOOTPROTO= IPADDR=192.168.6.8 NETMASK=255.255.255.0 USERCTL=yes IPV6INIT=no node 1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0 inet6 fe80::c49b:8eff:feaf:a769/64 scope link valid_lft forever preferred_lft forever node 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0 inet6 fe80::230:48ff:fef7:f81/64 scope link valid_lft forever preferred_lft forever
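
    In CARP terms, two nodes with identical advskew and no preemption can both elect themselves master, which matches what is observed above. A sketch of asymmetric settings in the question's own OPTIONS syntax: give the nodes different advskew values and keep -P/--preempt off on both, so a recovered node should not seize the VIP from a live master:

        # preferred node, /etc/sysconfig/carp/vip-010.conf
        OPTIONS="-z -k 10"
        # standby node
        OPTIONS="-z -k 50"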

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server: [global] create mask= 0666 directory mask= 0777 [share] force directory mode= 0775 force create mode= 0660 I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect anyone to do. This is the complete smb.conf: [global] encrypt passwords = yes dns proxy = no strict locking = no read raw = yes write raw = yes oplocks = yes max xmit = 65535 deadtime = 15 display charset = LOCALE max log size = 10 syslog only = yes syslog = 1 load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes smb passwd file = /var/etc/private/smbpasswd private dir = /var/etc/private getwd cache = yes guest account = nobody map to guest = Bad Password obey pam restrictions = Yes # NOTE: read smb.conf. directory name cache size = 0 max protocol = SMB2 netbios name = freenas workgroup = COMPANY server string = FreeNAS Server store dos attributes = yes hostname lookups = yes security = user passdb backend = ldapsam:ldap://ldap.company.local ldap admin dn = cn=admin,dc=company,dc=local ldap suffix = dc=company,dc=local ldap user suffix = ou=Users ldap group suffix = ou=Groups ldap machine suffix = ou=Computers ldap ssl = off ldap replication sleep = 1000 ldap passwd sync = yes #ldap debug level = 1 #ldap debug threshold = 1 ldapsam:trusted = yes idmap uid = 10000-39999 idmap gid = 10000-39999 create mask = 0666 directory mask = 0777 client ntlmv2 auth = yes dos charset = CP437 unix charset = UTF-8 log level = 1 [share] path = /mnt/zfs0 printable = no veto files = /.snap/.windows/.zfs/ writeable = yes browseable = yes inherit owner = no inherit permissions = no vfs objects = zfsacl guest ok = no inherit acls = Yes map archive = No map readonly = no nfs4:mode = special nfs4:acedup = merge nfs4:chown = yes hide dot files force directory mode = 0775 force create mode = 0660
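
    For future visitors, one commonly reported culprit for exactly this OS X behavior is the CIFS UNIX extensions: when they are enabled, the Mac client sets POSIX modes itself after creating a file or directory, sidestepping force create/directory mode. A hedged thing to try in the [global] section (this also disables symlink support and other UNIX-extension features for all clients, so test first):

        unix extensions = no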

    Read the article

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I'm trying to access the dashboard of a site I just created ( localhost/wptest/site/wp-admin ) I get "This webpage has a redirect loop" and when I try to access the actual website ( localhost/wptest/site ) the page loads but without assets, such as css. When I access the network dashboard, or the primary site dashboard on localhost/wptest everything is just fine. Also when I edit the permalink of the second site in the network dashboard, to be like this: localhost/site it also runs fine. How to make it work with the default permalink structure localhost/wptest/site? The wordpress files are in /usr/share/html/wptest The wp-config.php is as follows: define('WP_ALLOW_MULTISITE', true); define('MULTISITE', true); define('SUBDOMAIN_INSTALL', false); define('DOMAIN_CURRENT_SITE', 'localhost'); define('PATH_CURRENT_SITE', '/wptest/'); define('SITE_ID_CURRENT_SITE', 1); define('BLOG_ID_CURRENT_SITE', 1); And the server block / virtual host is like this: server { ##DM - uncomment following line for domain mapping listen 80 default_server; #server_name example.com *.example.com ; ##DM - uncomment following line for domain mapping #server_name_in_redirect off; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; root /usr/share/nginx/html/wptest; index index.html index.htm index.php; if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*\.php) $2 last; } location / { try_files $uri $uri/ /index.php?$args ; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location = /robots.txt { access_log off; log_not_found off; } location ~ /\. { deny all; access_log off; log_not_found off; } } And finally here's an error log: 2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"

    Read the article

  • Magento user created attribute for products is not saved...

    - by Elzo Valugi
    I am fighting an apparently simple thing for about two days now. I hope somebody can help. I created the myUpdate EAV attribute class Company_Module_Model_Resource_Eav_Mysql4_Setup extends Mage_Eav_Model_Entity_Setup { public function getDefaultEntities() { return array( 'catalog_product' => array( 'entity_model' => 'catalog/product', 'attribute_model' => 'catalog/resource_eav_attribute', 'table' => 'catalog/product', 'additional_attribute_table' => 'catalog/eav_attribute', 'entity_attribute_collection' => 'catalog/product_attribute_collection', 'attributes' => array( 'my_update' => array( 'label' => 'My timestamp', 'type' => 'datetime', 'input' => 'date', 'default' => '', 'class' => 'validate-date', 'backend' => 'eav/entity_attribute_backend_datetime', 'frontend' => '', 'table' => '', 'source' => '', 'global' => Mage_Catalog_Model_Resource_Eav_Attribute::SCOPE_GLOBAL, 'visible' => true, 'required' => false, 'user_defined' => true, 'searchable' => false, 'filterable' => false, 'comparable' => false, 'visible_on_front' => false, 'visible_in_advanced_search' => false, 'unique' => false, 'apply_to' => 'simple', ) ) ) ); } } The attribute is created OK and on install appears correctly in the list of attributes. Later I do // for product 1 $product->setMyUpdate($stringDate); // string this format: yyyy-MM-dd HH:mm:ss $product->save(); // and saves without issues * in admin module But later when I do: $product = Mage::getModel('catalog/product')->load(1); var_dump($product->getMyUpdate()); // returns null Somehow the data is not really saved.. or I am not retrieving it correctly. Please advice on how to get the data and where the data is saved in the DB so I can check at if the insert is done correctly. Thanks.

    Read the article

  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy because I volunteered to learn powershell and implement as much of the project in code as I could.) Background: We have been asked to create many distribution groups, say about 500+. These groups will contain two types of members. (Apologies if I get these terms wrong.) One type will be internal AD users, and the other type will be external users that I create Mail Contact entries for. We have been asked to make it so that a "Reply All" is not possible to any messages sent to these groups. I don't believe that is 100% possible to enforce for the following reasons. My question is - is my following reasoning sound? If not, please feel free to educate me on if / how things can properly be implemeneted. Thanks! My reasoning on why it's impossible to prevent 100% of potential reply-all actions: An interal AD user could put the DL in their To: field. They then click the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to, at least, external user #2, which wouldn't even involve our Exchange mail relays. An internal AD user could place the DL in their Outlook To: field, then click the '+' button to expand the DL. They then fire off an email to everyone that was in the group. (But the individual addresses are listed in the 'To:' field.) Because we now have a message sent to multiple recipients in the To: field, the addresses have been "exposed", and anyone is free to reply-all, and the messages just get sent to everyone in the To: field. Even if we try to set a Reply-To: field for all of these DLs, external mail clients are not obligated to abide by it, or force users to abide by it. Are my two points above valid? (I admit, they are somewhat similar.) Am I correct to tell our leadership "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups that the Bcc: field is to be used at all times." I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!

    Read the article

  • I could use some help with my SQL command

    - by SuperSpy
    I've got a database table called 'mesg' with the following structure: receiver_id | sender_id | message | timestamp | read Example: 2 *(«me)* | 6 *(«nice girl)* | 'I like you, more than ghoti' | yearsago | 1 *(«seen it)* 2 *(«me)* | 6 *(«nice girl)* | 'I like you, more than fish' | now | 1 *(«seen it)* 6 *(«nice girl)* | 2 *(«me)* | 'Hey, wanna go fish?' | yearsago+1sec | 0 *(«she hasn't seen it)* It's quite a tricky thing that I want to achieve. I want to get: the most recent message(=ORDER BY time DESC) + 'contact name' + time for each 'conversation'. Contact name = uname WHERE uid = 'contact ID' (the username is in another table) Contact ID = if(sessionID*(«me)*=sender_id){receiver_id}else{sender_id} Conversation is me = receiver OR me = sender For example: From: **Bas Kreuntjes** *(« The message from bas is the most recent)* Hey $py, How are you doing... From: **Sophie Naarden** *(« Second recent)* Well hello, would you like to buy my spam? ... *(«I'll work on that later >p)* To: **Melanie van Deijk** *(« My message to Melanie is 3th)* And? Did you kiss him? ... That is a rough output. QUESTION: Could someone please help me setup a good SQL command. This will be the while loup <?php $sql = "????"; $result = mysql_query($sql); while($fetch = mysql_fetch_assoc($result)){ ?> <div class="message-block"> <h1><?php echo($fetch['uname']); ?></h1> <p><?php echo($fetch['message']); ?></p> <p><?php echo($fetch['time']); ?></p> </div> <?php } ?> I hope my explanation is good enough, if not, please tell me. Please don't mind my English, and the Dutch names (I am Dutch myself) Feel free to correct my English UPDATE1: Best I've got until now: But I do not want more than one conversation to show up... u=user table m=message table SELECT u.uname, m.message, m.receiver_uid, m.sender_uid, m.time FROM m, u WHERE (m.receiver_uid = '$myID' AND u.uid = m.sender_uid) OR (m.sender_uid = '$myID' AND u.uid = m.receiver_uid) ORDER BY time DESC;

    Read the article

  • Hibernate updating records and implementing listeners: getting only required attribute values for event.getOldState()

    - by Narendra
    Hi All, I am using Hibernate 3 as my persistence framework. Below is the sample hbm file I am using. <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="com.test.User" table="user"> <meta attribute="implements">com.test.dao.interfaces.IEntity</meta> <id name="key" type="long" column="user_key"> <generator class="increment" /> </id> <property name="userName" column="user_name" not-null="true" type="string" /> <property name="password" column="password" not-null="true" type="string" /> <property name="firstName" column="first_name" not-null="true" type="string" /> <property name="lastName" column="last_name" not-null="true" type="string" /> <property name="createdDate" column="created_date" not-null="true" type="timestamp" insert="false" update="false" /> <property name="createdBy" column="created_by" not-null="true" type="string" update="false" /> </class> </hibernate-mapping> I am added a post-update listener. What it will do is if there any updations perfomed on User then it will be invoked and cahnges will be inserted to audit table. Below is the sample implementation for postupdate event. public void onPostUpdate(PostUpdateEvent event) { LogHelper.info(logger, "Begin - onPostUpdate " + event.getEntity().getClass().getSimpleName()); if (!this.checkForAudit(event.getEntity().getClass().getSimpleName())) { // check do we need to audit it. } // Get Attribute Names String[] attrNames = event.getPersister().getEntityMetamodel() .getPropertyNames(); Object[] oldobjectValue = c Object[] newObjectValue = event.getState(); this.auditDetailsEvent(attrNames, oldobjectValue, newObjectValue); LogHelper.info(logger, "End - onPostUpdate"); // return false; } Here is my requirement. event.getPersister().getEntityMetamodel() .getPropertyNames(); or event.getOldState(); or event.getState(); must return attribute names or value which i can update or insert. Is there any way to control the return values of above one's. Pleas help me on this regard. Thanks, Narendra

    Read the article

  • Python and C++ Sockets converting packet data

    - by yeus
    First of all, to clarify my goal: There exist two programs written in C in our laboratory. I am working on a Proxy Server (bidirectional) for them (which will also mainpulate the data). And I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs, I only know the definition file of the packets. Now: assuming a packet definition in one of the C++ programs reads like this: unsigned char Packet[0x32]; // Packet[Length] int z=0; Packet[0]=0x00; // Spare Packet[1]=0x32; // Length Packet[2]=0x01; // Source Packet[3]=0x02; // Destination Packet[4]=0x01; // ID Packet[5]=0x00; // Spare for(z=0;z<=24;z+=8) { Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z)); Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z)); Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z)); Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z)); Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z)); Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z)); Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z)); Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z)); Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z)); Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z)); Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z)); } if(SendPacket(sock,(char*)&Packet,sizeof(Packet))) return 1; return 0; What would be the easiest way to receive that data, convert it into a readable python format, manipulate them and send them forward to the receiver?

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to setup Wordpress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. As many other people, I am getting stuck in the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get: http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/ http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules It is partially working: http://blog.ssis.edu.vn works. http://blog.ssis.edu.vn/umasse/ works. But other permalinks, like these two to a post or to a static page, don't work: http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/ http://blog.ssis.edu.vn/umasse/sample-page/ They either take you to a 404 error, or to some other blog! Here is my configuration: server { listen 80 default_server; server_name blog.ssis.edu.vn; root /var/www; access_log /var/log/nginx/blog-access.log; error_log /var/log/nginx/blog-error.log; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # Add trailing slash to */username requests rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent; # Directives to send expires headers and turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } # this prevents hidden files (beginning with a period) from being served location ~ /\. { access_log off; log_not_found off; deny all; } # Pass uploaded files to wp-includes/ms-files.php. rewrite /files/$ /index.php last; if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } # Rewrite multisite '.../wp-.*' and '.../*.php'. if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } location ~ \.php$ { # Forbid PHP on upload dirs if ($uri ~ "uploads") { return 403; } client_max_body_size 25M; try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.

    Read the article

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify Wordpress comments. It's working fine in Firefox, Chrome, Safari, et. al., but in IE, the response text is cached with the result that Ajax is pulling in the wrong comment. jQuery(this).ajaxSubmit({ success: function(data) { var response = $("<ol>"+data+"</ol>"); response.find('.commentlist li:last').hide().appendTo(jQuery('.commentlist')).slideDown('slow'); } }); ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down in its place. IE, however, is returning the cached version of the page, which doesn't include the new comment. So ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up being attached to the POST that submits the comment to the server rather than the GET that returns the response, and so has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance! EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post is spitting back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the Wordpress post template, which disables caching on all posts at all times--not quite the behavior I'm looking for. Is there a way to set a php conditional--"if is_ajax" or something like that--that would keep the headers from being applied during regular pageloads, but plug them in if the page was called by an Ajax GET?

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >