Search Results

Search found 2109 results on 85 pages for 'gis man'.

Page 69/85

  • broken upgrade from 10.04 to 12.04 on a VPS - recoverable?

    - by HorusKol
    I have a VPS hosted 1500 km away. It originally came with 9.10 - and this morning I decided that I really should get to an LTS release, and figured I'd jump to 12.04. Researching, I discovered that there is no direct path between 9.10 and 12.04, but that I could upgrade via 10.04. After backing up my data, I dove in. The upgrade to 10.04 was successful, and I proceeded to upgrade to 12.04. Things started to go wrong. First, I got an error with GLIBC - I retried and got the same error. That's when I stopped the upgrade. I then tried another round of apt-get update && apt-get upgrade and got a list of "unmet dependencies":

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed

    I tried to see if I could do something about these using apt-get -f install. This told me that I would need to upgrade my kernel. I found instructions on how to do this, but when I ran apt-get to install the new linux headers, I got the same dependency errors. I found another answer here where someone else had had an interruption in their upgrade, and tried the solution that worked for them:

        sudo apt-get -f dist-upgrade

    This resulted in the error:

        E: Could not perform immediate configuration on 'python2.7-minimal'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    I tried to resolve this by:

        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal

    But this simply ended up with this last list of dependency errors:

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed
        python: Depends: python-minimal (= 2.6.5-0ubuntu1) but 2.7.3-0ubuntu2 is to be installed
        python-apt: Depends: libapt-pkg-libc6.10-6-4.8
        python-minimal: Depends: python2.7-minimal (>= 2.7.3) but it is not going to be installed
             Breaks: python-support (< 1.0.10ubuntu2) but 1.0.4ubuntu1 is to be installed
        synaptic: Depends: libapt-pkg-libc6.10-6-4.8

    Any ideas on how to dig out of this hole?

    Read the article

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes having a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .NET 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example: SortedList<int, SortedList<int, SortedList<int, MyClass>>>. The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time and this allows me time to look at new technology and learn. In terms of using things like TDD, my development manager has no problem with it and can kind of see the purpose of it, he just won't use it himself. This means I am alone in learning new things and am often resorting to Stack Overflow to make sure I get things right. The company has a lot of flexibility: I can work from home if need be, and when my daughter was born they let me work from home one day a week. However, they expect this flexibility in return, often asking me to travel, occasionally on a Friday afternoon for the following week. Sometimes it's abroad. We are also pretty much on call 24/5, as we have engineers in various countries. Also, we have no testers, so most of the testing is done by us developers and some testing by engineers. Either way, no one likes testing!

    I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices, but it appears to have relaxed now, although I suspect it is still reasonably formal. There is a new team of developers I don't know and they are about to move to new offices. The team lead is a guy who was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation. I think some of the Victorian practices may have rubbed off on him. However, he did say that if things crop up then, as long as he can trust the person, they can work at home, although he prefers people in the office. The team uses Scrum, TDD and SOLID design principles, so they are quite up to date in technology. They are reasonably Microsoft focused. It appears the Technical Director might be the R&D man and research new technology on his own, not allowing developers to play with new technology. He possibly might be a super-developer who makes all the decisions that no one can argue with. They are currently moving to Entity Framework away from NHibernate, based on issues where their queries seem to fail sometimes and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused and they are an expanding company making a profit each year. I'm not sure what the team morale is and whether they have a laugh. When I had a tour around the office they were there in dead silence.

    I'm really unsure which role is the best for me, and going with my gut instinct is useless as I'm not sure what my gut is telling me. Based on the information above, which role would you choose and why?

    Read the article

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution.

    Originally I used Excel. A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could all in the end be traced back to A1. A1 was like an element of an array, which I then incremented to go through all my data. This was way too slow.

    The only other option I found originally was to hand-code the calculations in C# inside a loop, then simply recompile each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow.

    Next I created an application to read Excel and to imitate it exactly, which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantage is that my application sorts everything by dependency (or I could use events), so I don't have to (like Excel does) - and of course the speed.

    But now it's not a single application any more. Instead it's two applications: one which only reads my formulas and writes another program, and the generated one, which only lives for a short while before I do other runs through my data with different formulas/settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other. My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again - a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshalled to be moved between the AppDomains. That would slow things down - not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel.

    I'm really super puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do things like Excel does, which is to evaluate strings again and again. So my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user-defined stuff into a program, or evaluating it from a string or something again and again. And my only option is probably to switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work? Is anything better possible? Like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the cave man days?

    Sorry, it's just so super frustrating not knowing what one can do and can't do. If only I could understand and learn this stuff fast enough.

    Read the article

  • cannot install firmware-b43-installer

    - by unknown
    This is the output I get when installing; installArchives() failed:

        Preconfiguring packages ...
        Preconfiguring packages ...
        Preconfiguring packages ...
        Selecting previously deselected package menu.
        (Reading database ... 216125 files and directories currently installed.)
        Unpacking menu (from .../menu_2.1.44ubuntu1_i386.deb) ...
        Selecting previously deselected package wifi-radar.
        Unpacking wifi-radar (from .../wifi-radar_2.0.s05-1.2_all.deb) ...
        Processing triggers for man-db ...
        Processing triggers for install-info ...
        Processing triggers for doc-base ...
        Processing 1 added doc-base file(s)...
        Registering documents with scrollkeeper...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for python-gmenu ...
        Rebuilding /usr/share/applications/desktop.en_US.UTF8.cache...
        Processing triggers for python-support ...
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:30-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
         subprocess installed post-installation script returned error exit status 4
        No apport report written because MaxReports is reached already
        Setting up menu (2.1.44ubuntu1) ...
        Processing triggers for menu ...
        Setting up wifi-radar (2.0.s05-1.2) ...
        Processing triggers for menu ...
        Errors were encountered while processing:
         firmware-b43-installer
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:33-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
         subprocess installed post-installation script returned error exit status 4

    The same thing occurs when I try to install any of the wireless applications. All other software installs fine, and I get the same error whenever I try to install the firmware. I tried to go to the link (http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2) and download the package by hand, but found no makefile in the package. Please help me.

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment.

    As a simple example, the poll() system call takes a timeout argument in units of msec:

        System Calls                                          poll(2)
        NAME
             poll - input/output multiplexing
        SYNOPSIS
             int poll(struct pollfd fds[], nfds_t nfds, int timeout);

    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec.

    In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by the POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mentions CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:

        nanosleep
        pthread_mutex_timedlock pthread_mutex_reltimedlock_np
        pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np
        pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np
        mq_timedreceive mq_reltimedreceive_np
        mq_timedsend mq_reltimedsend_np
        sem_timedwait sem_reltimedwait_np
        poll select pselect
        _lwp_cond_timedwait _lwp_cond_reltimedwait
        semtimedop sigtimedwait
        aiowait aio_waitn aio_suspend
        port_get port_getn
        cond_timedwait cond_reltimedwait
        setitimer (ITIMER_REAL)
        misc rpc calls, misc ldap calls

    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread.

    Here are some exceptions for which the default resolution is still 10 msec:

    - The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick.
    - The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules.
    - A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick.

    Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
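
    For illustration, here is a minimal sketch (standard POSIX calls only; link with -lrt on older systems) that queries the advertised CLOCK_REALTIME resolution and times an actual 1 msec poll() timeout - per the text above, it should come back near 1 msec on Solaris 11.1 and near 10 msec on earlier releases:

        #include <poll.h>
        #include <stdio.h>
        #include <time.h>

        /* Print the advertised CLOCK_REALTIME resolution, then measure how long
         * a poll() call with a 1 msec timeout really takes to return. */
        int main(void)
        {
            struct timespec res, t0, t1;

            if (clock_getres(CLOCK_REALTIME, &res) == 0)
                printf("CLOCK_REALTIME resolution: %ld nsec\n", res.tv_nsec);

            clock_gettime(CLOCK_MONOTONIC, &t0);
            poll(NULL, 0, 1);                       /* request a 1 msec timeout */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            printf("poll(NULL,0,1) took %.3f msec\n",
                   (t1.tv_sec - t0.tv_sec) * 1e3 +
                   (t1.tv_nsec - t0.tv_nsec) / 1e6);
            return 0;
        }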

    Read the article

  • Encryption is hard: AES encryption to Hex

    - by Rob Cameron
    So, I've got an app at work that encrypts a string using ColdFusion. ColdFusion's built-in encryption helpers make it pretty simple:

        encrypt('string_to_encrypt','key','AES','HEX')

    What I'm trying to do is use Ruby to create the same encrypted string that this ColdFusion script is creating. Unfortunately, encryption is the most confusing computer science subject known to man. I found a couple of helper methods that use the openssl library and give you a really simple encryption/decryption method. Here's the resulting string:

        "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"

    Which looks unicode-ish to me. I've tried several libraries to convert this to hex, but they all say it contains invalid characters. Trying to unpack it results in this:

        string = "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"
        string.unpack('U')
        ArgumentError: malformed UTF-8 character
            from (irb):19:in `unpack'
            from (irb):19

    At the end of the day it's supposed to look like this (the output of the ColdFusion encrypt method):

        F8E91A689565ED24541D2A0109F201EF

    Of course that's assuming that all the padding, initialization vectors, salts, cipher types and a million other possible differences all line up. Here's the simple script I'm using to encrypt/decrypt:

        def aes(m,k,t)
          (aes = OpenSSL::Cipher::Cipher.new('aes-256-cbc').send(m)).key = Digest::SHA256.digest(k)
          aes.update(t) << aes.final
        end

        def encrypt(key, text)
          aes(:encrypt, key, text)
        end

        def decrypt(key, text)
          aes(:decrypt, key, text)
        end

    Any help? Maybe just a simple option I can pass to OpenSSL::Cipher::Cipher that will tell it to hex-encode the final string?

    Read the article

  • NSTextField autocomplete

    - by Rasmus Styrk
    Does anyone know of any class or lib that can add autocompletion to an NSTextField? I'm trying to get the standard autocompletion to work, but it is designed as a synchronous API, and I get my autocompletion words via an API call over the Internet. What I have done so far is:

        - (void)controlTextDidChange:(NSNotification *)obj
        {
            if([obj object] == self.searchField) {
                [self.spinner startAnimation:nil];
                [self.wordcompletionStore completeString:self.searchField.stringValue];

                if(self.doingAutocomplete)
                    return;
                else {
                    self.doingAutocomplete = YES;
                    [[[obj userInfo] objectForKey:@"NSFieldEditor"] complete:nil];
                }
            }
        }

    When my store is done, it calls this delegate method:

        - (void)completionStore:(WordcompletionStore *)store didFinishWithWords:(NSArray *)arrayOfWords
        {
            [self.spinner stopAnimation:nil];
            self.completions = arrayOfWords;
            self.doingAutocomplete = NO;
        }

    The code that returns the completion list to the NSTextField is:

        - (NSArray *)control:(NSControl *)control textView:(NSTextView *)textView completions:(NSArray *)words forPartialWordRange:(NSRange)charRange indexOfSelectedItem:(NSInteger *)index
        {
            *index = -1;
            return self.completions;
        }

    My problem is that this will always be one request behind, and the completion list only shows on every second character the user inputs. I have tried searching Google and SO like a madman, but I can't seem to find any solutions. Any help is much appreciated.

    Read the article

  • How to edit the XSL for RSS Viewer Webpart

    - by Nagendra
    I am using a blog site as the source for my RSS feed. The feed currently shows up like this:

        Blog: Posts Test
        Thursday, March 04, 2010 -
        Body: With 25 four's and 3 sixers Sachin crosses 200 (147 balls) runs in an single ODI innings. Creates another world record. Watch the final over where he got it double hundred with MSD on the other end. This is what he had to say after getting the MOM (man of the match): I dedicate this knock to all the people of India, who have supported me throughout over the last 20 years. I was timing the ball well, and I felt that anywhere between 340 to 350 was a good target. I thought Karthik, Yusuf and Dhoni supported me well. I thought that a 200 would be possible once I crossed 175 in the 42nd over. I am enjoying my cricket at the moment. There have been a few bad decisions I have made as a batsman, but as long as the passion is there I will carry on. It feels good that I lasted the 50 overs, it was a good test of my fitness and I would like to do this once again. Well!!! Wait for more.
        Published: 3/4/2010 3:18 PM
        More...

    I actually want to remove the Body and Published parameters. I just want my XSLT to show only the description of the blog post, with none of this metadata. Can anyone help me with specifying the XSL changes?

    Read the article

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments:

        using (MemoryStream ms = new MemoryStream())
        {
            TextRange tr = new TextRange(source.ContentStart, source.ContentEnd);
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            tr = new TextRange(target.CaretPosition, target.CaretPosition);
            tr.Load(ms, DataFormats.Xaml);
        }

    This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer to this is going to end up being checking the target to see if it consists entirely of a single block, and if it does, setting the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

    - Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man)
    - Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture
    - The debug cycle is shortened by not having to deploy your client application to a test server
    - No mishmash of technologies as with web-based development (Struts, JSF, JQuery, HTML, JSTL etc., etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider the NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • Adding a Contact with the Google Contacts .NET API

    - by Bryan
    I am using the following code to add a contact, but I get the following unhandled exception:

        Google.GData.Client.GDataRequestException: Execution of request failed: http://www.google.com/m8/feeds/contacts/default/full

    The code:

        GDataCredentials myCred = new GDataCredentials("myusername", "mypassword");
        RequestSettings myRequestSettings = new RequestSettings("macpapa-GoogleCodeTest3-1", myCred);
        ContactsRequest myContactRequest = new ContactsRequest(myRequestSettings);

        Contact myContact = new Contact();
        myContact.Title = "Be Dazzle";

        PhoneNumber myPhoneNumber = new PhoneNumber("805-453-6688");
        myPhoneNumber.Rel = ContactsRelationships.IsGeneral;
        myPhoneNumber.Primary = true;
        myContact.Phonenumbers.Add(myPhoneNumber);

        EMail myEmail = new EMail("[email protected]", ContactsRelationships.IsHome);
        EMail myEmail2 = new EMail("[email protected]", ContactsRelationships.IsWork);
        myEmail.Primary = true;
        myContact.Emails.Add(myEmail);
        myContact.Emails.Add(myEmail2);

        PostalAddress postalAddress = new PostalAddress();
        postalAddress.Value = "123 somewhere lane";
        postalAddress.Primary = true;
        postalAddress.Rel = ContactsRelationships.IsHome;
        myContact.PostalAddresses.Add(postalAddress);

        Uri feedUri = new Uri(ContactsQuery.CreateContactsUri("default"));
        Contact createdContact = myContactRequest.Insert<Contact>(feedUri, myContact);

    Please offer any available suggestions. Thank you.

    Read the article

  • Getting past dates in HP-UX with ksh

    - by Alejandro Atienza Ramos
    OK, so I need to translate a script from a nice Linux & bash configuration to ksh on HP-UX. Each and every command expects a different syntax and I want to kill myself. But let's skip the rant. This is part of my script:

        anterior=`date +"%Y%0m" -d '1 month ago'`

    I basically need to get a past date in the format 201002. Never mind the fact that, in the new environment, %0m means "no zeroes", while in the other one it means "yes, please put that zero on my string". It doesn't even accept the "1 month ago". I've read the man page for date on HP-UX and it seems you just can't do date arithmetic with it. I've been looking around for a while, but all I find are lengthy solutions. I can't quite understand why such a typical administrative task as adding to dates needs so much fuss. Isn't there a way to convert my one-liner to, well, I don't know, another one-liner? Come on, I've seen proposed solutions that used bc, had thirty-plus lines and magic numbers all over the script. The simplest solutions seem to use Perl... but I don't know how to modify them, as they're quite arcane. Thanks!
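
    If the stock date(1) really cannot do the arithmetic, one portable fallback (a sketch only - not tested on HP-UX) is a tiny C helper that leans on mktime() normalizing an out-of-range tm_mon:

        #include <stdio.h>
        #include <time.h>

        /* Print the year and month of "1 month ago" as YYYYMM, e.g. 201002.
         * mktime() normalizes tm_mon == -1 into December of the previous year,
         * so no manual year/month rollover logic is needed. */
        int main(void)
        {
            char buf[16];
            time_t now = time(NULL);
            struct tm t = *localtime(&now);

            t.tm_mon -= 1;      /* one month ago */
            t.tm_mday = 1;      /* pin the day so Mar 31 doesn't become "Feb 31" */
            mktime(&t);         /* renormalize the broken-down time */

            strftime(buf, sizeof buf, "%Y%m", &t);
            printf("%s\n", buf);
            return 0;
        }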

    Read the article

  • gcc: Do I need -D_REENTRANT with pthreads?

    - by stefanB
    On Linux (kernel 2.6.5) our build system calls gcc with -D_REENTRANT. Is this still required when using pthreads? How is it related to the gcc -pthread option? I understand that I should use -pthread with pthreads; do I still need -D_REENTRANT? On a side note, is there any difference that you know of in the usage of _REENTRANT between gcc 3.3.3 and gcc 4.x.x?

    When I use the -pthread gcc option I can see that _REENTRANT gets defined. Will omitting -D_REENTRANT from the command line make any difference? For example, could some objects be compiled without multithreaded support and then linked into a binary that uses pthreads and cause problems? I assume it should be OK just to use g++ -pthread:

        > echo | g++ -E -dM -c - > singlethreaded
        > echo | g++ -pthread -E -dM -c - > multithreaded
        > diff singlethreaded multithreaded
        39a40
        > #define _REENTRANT 1

    We're compiling multiple static libraries and applications that link with the static libraries; both the libraries and the applications use pthreads. I believe it was required at some stage in the past, but want to know if it is still required. Googling hasn't returned any recent information mentioning -D_REENTRANT with pthreads. Could you point me to links or references discussing its use with recent versions of kernel/gcc/pthread?

    Clarification: at the moment we're using -D_REENTRANT and -lpthread. I assume I can replace them with just g++ -pthread; looking at man gcc, it sets the flags for both the preprocessor and the linker. Any thoughts?
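
    A quick way to see what a given toolchain does (a sketch; the exact macro set is toolchain-specific) is to build a file that refuses to compile when _REENTRANT is missing, once with and once without -pthread:

        /* reentrant_check.c - try:  g++ -pthread reentrant_check.c
         * and again without -pthread to compare the behaviour. */
        #include <pthread.h>
        #include <stdio.h>

        #ifndef _REENTRANT
        #error "_REENTRANT is not defined - was -pthread (or -D_REENTRANT) passed?"
        #endif

        static void *worker(void *arg)
        {
            (void)arg;
            puts("worker running");
            return NULL;
        }

        int main(void)
        {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, NULL);  /* -pthread also links libpthread */
            pthread_join(tid, NULL);
            return 0;
        }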

    Read the article

  • pthreads: reader/writer locks, upgrading read lock to write lock

    - by ScaryAardvark
    I'm using read/write locks on Linux and I've found that trying to upgrade a read-locked object to a write lock deadlocks, i.e.

        // acquire the read lock in thread 1.
        pthread_rwlock_rdlock( &lock );

        // make a decision to upgrade the lock in thread 1.
        pthread_rwlock_wrlock( &lock );  // this deadlocks, as we already hold the read lock.

    I've read the man page and it's quite specific:

        The calling thread may deadlock if at the time the call is made it holds the read-write lock (whether a read or write lock).

    What is the best way to upgrade a read lock to a write lock in these circumstances? I don't want to introduce a race on the variable I'm protecting. Presumably I can create another mutex to encompass the releasing of the read lock and the acquiring of the write lock, but then I don't really see the use of read/write locks. I might as well simply use a normal mutex. Thx
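
    Since pthreads offers no atomic upgrade, the usual workaround (a sketch, assuming the protected state gives you something to re-check) is to release the read lock, take the write lock, and then revalidate the condition before acting on it:

        #include <pthread.h>

        static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
        static int shared_value;

        /* The condition is re-checked after the write lock is taken, because
         * another writer may have run between the unlock and the wrlock. */
        void update_if_needed(int threshold, int new_value)
        {
            pthread_rwlock_rdlock(&lock);
            int needs_update = (shared_value < threshold);
            pthread_rwlock_unlock(&lock);      /* give up the read lock first */

            if (!needs_update)
                return;

            pthread_rwlock_wrlock(&lock);      /* then take the write lock */
            if (shared_value < threshold)      /* revalidate: state may have changed */
                shared_value = new_value;
            pthread_rwlock_unlock(&lock);
        }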

    Read the article

  • split video (avi/h264) on keyframe

    - by m.sr
    Hello. I have a big video file. ffmpeg, tcprobe and other tools say it is an h264 stream in an AVI container. Now I'd like to cut small chunks out of the video.

    Problem: the index of the video seems corrupted/destroyed. I kind of fixed this via mplayer -forceidx -saveidx <IndexFile> <BigVideoFile>. The problem here is that I'm now stuck with mplayer/mencoder, which can use this index file via -loadidx <IndexFile>. I have tried correcting the index as described in man aviindex (mplayer -frames 0 -saveidx mpidx broken.avi ; aviindex -i mpidx -o tcindex ; avimerge -x tcindex -i broken.avi -o fixed.avi), but this didn't fix my video - meaning that most tools I've tested still couldn't seek in the video file.

    Problem: I cut out parts of the video via the following command: mencoder -loadidx in.idx -ss 8578 -endpos 20 -oac faac -ovc x264 -sws 9 -lavfopts format=mp4 -x264encopts <LotsOfOpts> -of lavf -vf scale=800:-10,harddup in.avi -o out.mp4. The problem here is that some of the resulting videos are corrupted at the beginning. I think this is due to the fact that I do not necessarily cut at a keyframe.

    Questions:

    - What is the best way to fix the index of an AVI "inline" so that every tool can again work as expected with it?
    - How can I split at the keyframes? Is there an mencoder option for this?
    - Do keyframes come at a fixed frequency? How can I find out this frequency? (With a bit of math it should then be possible to calculate the next keyframe and cut there.)
    - Is there perhaps some completely different way to split this movie? Doing it by hand is not an option; I have to cut out 1000+ chunks...

    Thanks a lot!

    Read the article

  • Phonegap Screenshot plugin in Cordova 2.0.0

    - by ObjectiveJ
    I have set up the Screenshot plugin from GitHub, located here: https://github.com/phonegap/phonegap-plugins/tree/master/Android/Screenshot

    I set it up as instructed with Cordova 1.8.1. It worked and the screenshot was saved to the phone. However, it fails with Cordova 2.0.0.

    Screenshot.java code: https://github.com/phonegap/phonegap-plugins/blob/master/Android/Screenshot/src/org/apache/cordova/Screenshot.java
    Screenshot.js code: https://github.com/phonegap/phonegap-plugins/blob/master/Android/Screenshot/www/Screenshot.js

    On the advice of a very clever man called Simon MacDonald, I removed lines 31 and 38 from the JS file shown above. However, when I try to use the Screenshot plugin with Cordova 2.0.0 I receive these errors:

        ERROR: org.json.JSONException: Value undefined of type java.lang.String cannot be converted to JSONArray.
        Error: Status=8 Message=JSON error
        file:///android_asset/www/cordova-2.0.0.js: Line 938 : Error: Status=8 Message=JSON error
        Error: Status=8 Message=JSON error at file:///android_asset_/www/cordova-2.0.0.js:938

    Line 938 of cordova-2.0.0.js is:

        // If error, then display error
        else {
            console.log("Error: Status="+v.status+" Message="+v.message);

    but I'm almost certain this is a compatibility error. Does anyone know a fix for this, or even the reason? I'm a bit lost. I call Screenshot.js with this code:

        function takeScreenShot() {
            cordovaRef.exec("Screenshot.saveScreenshot");
        }

    Any help massively appreciated.

    Read the article

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am working with a big C/C++ code base that builds with the gcc and Visual Studio compilers, where the enum base type is by default 32-bit (integer type). This code also has lots of inline and embedded assembly which treats enums as integer type, and enum data is used as 32-bit flags in many cases.

    When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the values the enum is set to: http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm

    For example, consider the enum below, which is used as a 32-bit flag; the compiler optimizes its base type down to unsigned char:

        enum Scale {
            TimesOne,   // 0
            TimesTwo,   // 1
            TimesFour,  // 2
            TimesEight, // 3
        };

    Using the --enum_is_int compiler option is not a good solution for our case, since it converts all the enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is the warning I found in the RVCT Compilers and Libraries Guide:

        The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries.

    Question: how do I convert every enum's base type (by hand-coded changes) to 32-bit without affecting the value ordering?

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the above change, but to our bad luck the compiler optimizes this as well. :( There is some syntax in .NET like enum Scale : int - is this an ISO C++ feature that the ARM compiler lacks? There is no #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?
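
    One hand-coded workaround that range-based compilers generally honour (a sketch; whether RVCT 2.2 accepts it is something you would have to verify) is to add a sentinel enumerator whose value cannot fit in anything smaller than 32 bits:

        /* The sentinel pushes the value range past 16 bits, so a compiler that
         * sizes enums by range has to pick a 32-bit underlying type. The real
         * flag values are unchanged. */
        enum Scale {
            TimesOne   = 0x00000000,
            TimesTwo   = 0x00000001,
            TimesFour  = 0x00000002,
            TimesEight = 0x00000003,
            Scale_Force32Bit = 0x7FFFFFFF   /* never used as a real value */
        };

        /* Compile-time check that the enum really is 4 bytes wide. */
        typedef int scale_is_32bit_check[sizeof(enum Scale) == 4 ? 1 : -1];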

    Read the article

  • HttpWebRequest: The request was aborted: The request was canceled.

    - by Emeka
    I've been working on developing a middle-man application of sorts, which uploads text to a CMS backend using HTTP POST requests for a series of dates (usually 7 at a time). I am using HttpWebRequest to accomplish this. It seems to work fine for the first date, but when it starts the second date I get: System.Net.WebException: The request was aborted: The request was canceled.

    I've searched around and found the following big clues:

    - http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/0d0afe40-c62a-4089-9d8b-fb4d206434dc
    - http://www.jaxidian.org/update/2007/05/05/8
    - http://arnosoftwaredev.blogspot.com/2006/09/net-20-httpwebrequestkeepalive-and.html

    They haven't been too helpful. I've tried overloading GetWebRequest, but that doesn't make sense because I don't make any use of that function. Here is my code: http://pastebin.org/115268

    I get the error on line 245, after it has run successfully at least once. I'd appreciate any help I can get, as this is the last step in a project I've been working on for some time. This is my first C#/VS project, so I'm open to any tips, but I would like to focus on getting this problem solved first. Thanks!

    Read the article

  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples of socket programming, we can see that some people use AF_INET while others use PF_INET. In addition, sometimes both of them are used in the same example. The question is: is there any difference between them? Which one should we use? If you can answer that, another question would be: why are there these two similar (but equal) constants?

    What I've discovered so far:

    The socket manpage

    In (Unix) socket programming, we have the socket() function that receives the following parameters:

        int socket(int domain, int type, int protocol);

    The manpage says:

        The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>.

    And the manpage cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read:

        The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: "The protocol family generally is the same as the address family", and subsequent standards use AF_* everywhere.

    The C headers

    The sys/socket.h header does not actually define those constants, but instead includes bits/socket.h. This file defines around 38 AF_ constants and 38 PF_ constants like this:

        #define PF_INET 2   /* IP protocol family. */
        #define AF_INET PF_INET

    Python

    The Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET.

    I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
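
    In practice the two names are interchangeable on today's systems; a minimal sketch (plain BSD sockets calls) following the modern convention of AF_* everywhere:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            /* AF_INET and PF_INET have the same value, so either spelling compiles
             * to the same call; post-BSD standards use AF_* for both the socket
             * domain and the address family. */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }

            struct sockaddr_in addr = { 0 };
            addr.sin_family      = AF_INET;            /* the address-family field */
            addr.sin_port        = htons(8080);
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

            /* ... bind() or connect() as usual ... */
            close(fd);
            return 0;
        }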

    Read the article

  • Not able to use 7-Zip to compress stdin and output with stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin, then take the data via stdout and do more conversions in my application. The man page shows this example:

        % echo foo | 7z a dummy -tgzip -si -so /dev/null

    I am using Windows and C#. Results:

        7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Creating archive StdOut

        System error:
        Not implemented

    Code:

        public static byte[] a7zipBuf(byte[] b)
        {
            string line;
            var p = new Process();
            line = string.Format("a dummy -t7z -si -so ");
            p.StartInfo.Arguments = line;
            p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe";
            p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.RedirectStandardError = true;
            p.StartInfo.RedirectStandardInput = true;
            p.Start();
            p.StandardInput.BaseStream.Write(b, 0, b.Length);
            p.StandardInput.Close();
            Console.Write(p.StandardError.ReadToEnd());
            //Console.Write(p.StandardOutput.ReadToEnd());
            return p.StandardOutput.BaseStream.ReadFully();
        }

    Is there another simple way to read the file into memory? Right now I can 1) write to a temporary file and read it (easy, and I can copy/paste some code), 2) use a file pipe (medium? I have never done it), or 3) something else.

    Read the article

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/write, there are basically 2 ways that the AIO library can inform your application that an async file I/O operation has completed. Either 1) you can use a signal, 2) you can use a callback function I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised. Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it seem to even be a part of the POSIX standard. So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread everytime a handler is invoked?
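
    For reference, a SIGEV_THREAD completion handler looks roughly like this (a sketch of the portable API only; as discussed above, glibc will still create a notification thread per completion rather than reuse a pool):

        #include <aio.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static char buf[4096];

        /* Runs on a thread created by the AIO implementation when the read completes. */
        static void on_complete(union sigval sv)
        {
            struct aiocb *cb = sv.sival_ptr;
            printf("read %zd bytes\n", aio_return(cb));
        }

        int main(void)
        {
            int fd = open("/etc/hostname", O_RDONLY);   /* any readable file will do */
            struct aiocb cb;

            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = fd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof buf;
            cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;
            cb.aio_sigevent.sigev_notify_function = on_complete;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;

            aio_read(&cb);      /* link with -lrt on Linux */
            sleep(1);           /* keep the process alive long enough for the demo */
            close(fd);
            return 0;
        }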

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C etc)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables.

    'Natural' (or obvious) dimensions are:

    - Date dimension
    - Geographical location

    which have hierarchical attributes. However, I am struggling with how to model the following fields:

    - sex (true/false)
    - demographic classification (A, B, C etc)

    The reason I am struggling with these fields is that:

    - They have no obvious hierarchical attributes which will aid aggregation (AFAIA) - which suggests they should be in a fact table
    - They are mostly static or very rarely change - which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    - How do male and female weights compare across different demographic classifications?
    - Which demographic classification (male AND female) shows the most increase in weight this quarter?
    - etc.

    Can anyone clarify whether sex and demographic classification are part of the fact table, or whether (as I suspect) they are dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.

    Read the article

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK, enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that introduces the need for an additional system call. Thus, the usage of MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire.

    But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

    - Call send() with an empty buffer and without MSG_MORE being set
    - Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected; obviously running the server through `strace' is not an option. So the simplest way would be to use `netcat' and then look at its `strace' output? Or will the kernel handle traffic transmitted over a loopback interface differently?
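
    For what it's worth, the two flush approaches can be sketched like this (illustrative only, not a drop-in for the wrapper above): either send the final fragment without MSG_MORE, or briefly clear TCP_CORK.

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Option 1: pass more = 0 on the last fragment; sending without MSG_MORE
         * lets the kernel push out whatever it has buffered for this socket. */
        ssize_t send_fragment(int fd, const void *buf, size_t len, int more)
        {
            return send(fd, buf, len, MSG_NOSIGNAL | (more ? MSG_MORE : 0));
        }

        /* Option 2: clearing TCP_CORK flushes any partially filled segment
         * immediately; setting it again re-enables the batching behaviour. */
        void flush_corked(int fd)
        {
            int off = 0, on = 1;
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off);
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
        }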

    Read the article

  • Diffie-Hellman in Silverlight

    - by cmaduro
    I am trying to devise a security scheme for encrypting the application-level data between a Silverlight client and a PHP webservice that I created. Since I am dealing with a public website, the information I am pulling from the service is public, but the information I'm submitting to the webservice is not. There is also a back end to the website for administration, so naturally all application data being pushed and pulled between the webservice and the Silverlight administration back end must also be encrypted.

    Silverlight does not support asymmetric encryption, which would work for the public website. Symmetric encryption would only work on the back end, because users do not log in to the public website, so no password-based keys could be derived. Still, symmetric encryption would be great, but I cannot securely save the private key in the Silverlight client: it would either have to be hardcoded or read from some kind of config file, and none of that is considered secure.

    So... plan B. My final alternative would be to implement the Diffie-Hellman algorithm, which supports symmetric encryption by means of key agreement. However, Diffie-Hellman is vulnerable to man-in-the-middle attacks. In other words, there is no guarantee that either side is sure of the other's identity, making it possible for communication to be intercepted and altered without the receiving party knowing about it. It is thus recommended to use a private shared key to encrypt the key agreement handshaking, so that the identity of either party is confirmed. This brings me back to my initial problem, the one that resulted in me needing Diffie-Hellman in the first place: how can I use a private key in a Silverlight client without hardcoding it either in the code or in an XML file? I'm all out of love on this one... is there any answer to this?

    Read the article

  • What Is The Proper Location For One-Offs In VCS Repos?

    - by Joe Clark
    I have recently started using Mercurial as our VCS. Over the years, I have used RCS, CVS, and - for the last 5 years - SVN. Back 13 years ago, when I primarily used CVS and RCS, large projects went into CVS and one-offs were edited in place on the specific server and stored in RCS. This worked well, as the one-offs were usually specific to the server and the servers were backed up nightly. Jump forward a decade and a lot of the one-off scripts became less centralized - they might be needed on any server at some random time. This was also OK, because now I was a begrudging SVN user. Everything (except for docs) got dumped into one repo. Jump to 2010. Now I am using Mercurial and am putting large projects in their own repos again. But what to do with the one-offs? The options as I see them:

    - A repo for each script. It seems a bit cluttered to create a repo for every one-page script that might get run once a year.
    - RCS. Not an option. There are many possible servers that might need a specific script.
    - Continuing to use SVN just for one-offs. No. There is no advantage I see over the next option.
    - Create a repo in Mercurial named "one-offs". This seems the most workable.

    The last option seems the best to me - however, is there a best practice regarding this? You also might be wondering whether these scripts are truly one-offs if they will be reused. Some of them may be reused 6 months or a year from now - some, never. However, nearly all of them involve several man-hours of work due to either complex logic or extensive error checking. Simply discarding them is not efficient.

    Read the article
