Search Results

Search found 5435 results on 218 pages for 'ones and zeroes'.


  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
    We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master w/matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6:

    If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.

    New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured.

    Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf.

    Further Investigation: In this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. We still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is: 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX And the error in /var/log/system.log on the client is: Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error> One thing I have noticed, and I'm not sure if this has any real effect is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following: ORGANIZER;CN=First Last:mailto:[email protected] But newer ones are more like one of the two following: ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1 ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable, they're merely a performance cache and are re-built on the fly, so are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but still errored out in the end. FWIW the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.
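    One way to test the ORGANIZER theory above, rather than a fix: export one of the affected calendars to an .ics file, strip the scheduling properties (ORGANIZER and ATTENDEE) from a copy, and re-import that copy into a test calendar; without those properties the old events are no longer scheduling objects and should become movable/deletable. The following is a minimal, hypothetical Java sketch of that filter; the class name and file arguments are placeholders, and iCalendar line folding (continuation lines beginning with a space or tab) is handled only naively:

        import java.io.*;

        // Hypothetical experiment: copy an exported calendar, dropping ORGANIZER and
        // ATTENDEE properties (and their folded continuation lines) so the affected
        // events stop being scheduling objects. Usage: java StripScheduling in.ics out.ics
        public class StripScheduling {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(new FileInputStream(args[0]), "UTF-8"));
                Writer out = new OutputStreamWriter(new FileOutputStream(args[1]), "UTF-8");
                String line;
                boolean skippingFolded = false;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith(" ") || line.startsWith("\t")) {
                        if (skippingFolded) continue;   // continuation of a dropped property
                    } else {
                        skippingFolded = line.startsWith("ORGANIZER") || line.startsWith("ATTENDEE");
                        if (skippingFolded) continue;
                    }
                    out.write(line + "\r\n");           // iCalendar lines end with CRLF
                }
                out.close();
                in.close();
            }
        }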


  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6Tb md raid5 volume built over 7*1Tb disks. I've had to shut down the server lately and when it went back up, 2 out of the 7 disks used for the raid volume had lost its conf : dmesg : [ 10.184167] sda: sda1 sda2 sda3 // System disk [ 10.202072] sdb: sdb1 [ 10.210073] sdc: sdc1 [ 10.222073] sdd: sdd1 [ 10.229330] sde: sde1 [ 10.239449] sdf: sdf1 [ 11.099896] sdg: unknown partition table [ 11.255641] sdh: unknown partition table All 7 disks have same geometry and were configured alike : dmesg : Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect All 7 disks (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in a md raid5 xfs volume. When booting, md, which was (obviously) out of sync kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well: dmesg : [ 19.566941] md: md0 stopped. [ 19.817038] md: bind<sdc1> [ 19.817339] md: bind<sdd1> [ 19.817465] md: bind<sde1> [ 19.817739] md: bind<sdf1> [ 19.817917] md: bind<sdh> [ 19.818079] md: bind<sdg> [ 19.818198] md: bind<sdb1> [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction [ 19.825259] raid5: device sdb1 operational as raid disk 0 [ 19.825261] raid5: device sdg operational as raid disk 6 [ 19.825262] raid5: device sdh operational as raid disk 5 [ 19.825264] raid5: device sdf1 operational as raid disk 4 [ 19.825265] raid5: device sde1 operational as raid disk 3 [ 19.825267] raid5: device sdd1 operational as raid disk 2 [ 19.825268] raid5: device sdc1 operational as raid disk 1 [ 19.825665] raid5: allocated 7334kB for md0 [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2 [ 19.825669] RAID5 conf printout: [ 19.825670] --- rd:7 wd:7 [ 19.825671] disk 0, o:1, dev:sdb1 [ 19.825672] disk 1, o:1, dev:sdc1 [ 19.825673] disk 2, o:1, dev:sdd1 [ 19.825675] disk 3, o:1, dev:sde1 [ 19.825676] disk 4, o:1, dev:sdf1 [ 19.825677] disk 5, o:1, dev:sdh [ 19.825679] disk 6, o:1, dev:sdg [ 19.899787] PM: Starting manual resume from disk [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device [ 28.663228] XFS mounting filesystem md0 [ 28.884433] md: resync of RAID array md0 [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync. [ 28.884433] md: using 128k window, over a total of 976759936 blocks. [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal) [ 32.680486] XFS: xlog_recover_process_data: bad clientid [ 32.680495] XFS: log mount/recovery failed: error 5 [ 32.682773] XFS: log mount failed I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array but it didnt work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and and sdh1, shich explains why it wont use it. I just don't know why those partitions are gone from /dev and how to readd those... 
blkid : /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2" /dev/sda2: TYPE="swap" /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3" /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" fdisk -l : Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8c878c87 Device Boot Start End Blocks Id System /dev/sda1 * 1 12 96358+ 83 Linux /dev/sda2 13 134 979965 82 Linux swap / Solaris /dev/sda3 135 4865 38001757+ 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xc9bdc1e9 Device Boot Start End Blocks Id System /dev/sdc1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xcc356c30 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe87f7a3d Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xb17a2d22 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8f3bce61 Device Boot Start End Blocks Id System /dev/sdg1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa98062ce Device Boot Start End Blocks Id System /dev/sdh1 1 121601 976760001 fd Linux raid autodetect I really dont know what happened nor how to recover from this mess. Needless to say the 5TB or so worth of data sitting on those disks are very valuable to me... Any idea any one? Did anybody ever experienced a similar situation or know how to recover from it ? Can someone help me? I'm really desperate... :x


  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those who might show some interest and want to know a little before offering their help to me. First a little about my level of programming skills, and a little about what I ask for.

    Where I'm at: I am not totally new to Javascript, and have dabbled a little with PHP earlier - well, have dabbled a lot with PHP in fact, but never got good at it because I program alone. And I have until now never used forums to get help etc. other than searching to see if anyone else had my problem before and what the solution was. So I am not an intuitive or talented programmer; I'm more of a very meticulous programmer, and you would be surprised how far you can get with if else... (ok, that's a joke hehe). My solutions are usually (I am guessing here) not the best ones - and slow, I take it - and the code is usually too long, and I have to look up most of the stuff I use (really, a lot of it is not done in "freehand"). I have a LOT of experience with HTML and CSS, and have always done well-formed markup, as well as being really into cross-browser work, and I always require that my work validates when it's done. I also worry about optimizing a lot, and work with sprites for images, minimize the number of HTTP requests etc., use H1, H2 etc. where it is logically correct, and use the correct elements rather than just div, span or p it... So because I am a workhorse and very meticulous I can actually pull off some quite "advanced" features, but it's always the basics that bite me in the end. Not fully understanding the syntax and so on usually gives me problems.

    I have recently discovered jQuery - which is a lot of fun... But I want to use it for the DOM node manipulation/handling only. As I mentioned, I worry about optimizing, and jQuery used for everything seems... well, not optimal; it strikes me that doing it yourself when possible is faster than accessing another script that may take a whole lot of other considerations into perspective when handling your variables and objects (and I am just guessing here since, as explained, I know nothing). So that's where I'm at... As mentioned, I just started with javascript for "real", so I do not have much to show, but at the end of my WOT you can see two unfinished scripts I have made so you can see roughly where I'm at - just check out the URL without the /feedback.html for the second example (I am only allowed to post 1 link since I am also a SO n00b) (and for those rushing over to a validation service, remember I wrote "when it's done"...).

    What I ask for: I am figuring this... I have a piece of code I am working on at the moment, and this little project has taught me a whole lot already, and I have "grown" a lot as a javascript programmer. If I add a whole lot of comments to the script, and explain what it is intended to do, will you then show me where I am writing incorrect code - making mistakes; where/how my code could be more optimal; and where I am just simply being a muppet? The code I want to use as the background for the tuition is the one here: http://projects.1000monkeys.dk/feedback.html Use Firebug and have a quick look-see...


  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There's only two relevant sections that I can find. First: 3.2.2.3 Pointers An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. and second: 4.1.5 Common definitions <stddef.h> The macros are NULL which expands to an implementation-defined null pointer constant; The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as: #define NULL __builtin_magic_null_pointer Or even: #define NULL ((void*)-1) My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void* must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that #define NULL (-1) is not a legal definition, because in if (NULL) do_stuff(); do_stuff() must not be called, whereas with if (-1) do_stuff(); do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work. 
Steve Jessop found an example of way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant: #define stringize_helper(x) #x #define stringize(x) stringize_helper(x) Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!


  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background

    We have a pool of approximately 20 Linux blades. Some are running SuSE, some are running RedHat. ALL share NAS space which contains the following 3 folders:

        /NAS/app/java - a symlink that points to an installation of a Java JDK. Currently version 1.5.0_10
        /NAS/app/lib - a symlink that points to a version of our application.
        /NAS/data - directory where our output is written

    All our machines have 2 processors (hyperthreaded) with 4gb of physical memory and 4gb of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but that does not enter into the current problem so please ignore it for the time being). Some of our jobs set a Max Heap size of 512mb, some others reserve a Max Heap size of 2048mb. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred.

    The Problem

    Once in a while a Job will fail immediately with the following message:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing. We've gone out to individual machines and tried executing them manually with the same result.

    Debugging

    It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is because we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes gives us:

        Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005

    'cat /proc/version' on the RedHat boxes gives us:

        Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005

    'uname -a' gives us the following on BOTH types of machines:

        UTC 2005 i686 i686 i386 GNU/Linux

    No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100mb total. 'top' currently shows the following:

        Mem: 4146528k total, 3536360k used, 610168k free, 132136k buffers
        Swap: 4194288k total, 0k used, 4194288k free, 3283908k cached

    'vmstat' currently shows the following:

        procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
         r b swpd free buff cache si so bi bo in cs us sy id wa
         0 0 0 610292 132136 3283908 0 0 0 2 26 15 0 0 100 0

    If we kick off a job with the following command line (Max Heap of 1850mb) it starts fine:

        java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld
        Hello World

    If we bump up the max heap size to 1875mb it fails:

        java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    It's quite clear that the memory currently being used is for buffering/caching, and that's why so little is being displayed as 'free'. What isn't clear is why there is a magical 1850mb line where anything higher means Java can't start. Any explanations would be greatly appreciated.
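    On 32-bit JVMs, the usual suspect for "Could not reserve enough space for object heap" is a shortage of contiguous virtual address space rather than a shortage of free RAM, which would fit the ceiling differing between the 2.6 bigsmp SuSE kernel and the 2.4 RedHat kernel. As a hedged aid for bisecting that ceiling (not a fix), here is a minimal probe; the class name and the example -Xmx values are illustrative only:

        // HeapProbe.java: run with increasing -Xmx values, e.g.
        //   java -Xmx1850M HeapProbe
        //   java -Xmx1875M HeapProbe
        // If the VM starts at all, it reports the heap it actually reserved.
        public class HeapProbe {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long mb = 1024L * 1024L;
                System.out.println("max heap   (MB): " + rt.maxMemory() / mb);
                System.out.println("total heap (MB): " + rt.totalMemory() / mb);
                System.out.println("free heap  (MB): " + rt.freeMemory() / mb);
            }
        }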


  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit, onConfigRequest, etc. will be necessary. (These are triggered e.g. by an incoming message on a TCP port.) My goal is to generate a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions. The rest of the protocol should simply be handled in the background (using the code of the base class) and should not need to appear in the programmer's code.

    What is the correct design strategy for such tasks? And how do I deal with the fact that the static main method is not inheritable? What are the key terms for this problem? (I have problems searching for a solution since I lack clear statements on this problem.) The goal is to create some sort of library/class which, included in one's code, results in executables following the protocol.

    EDIT (new explanation): Okay, let me try to explain in more detail. In this case the programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a Config message from the server which triggers an onConfig, and so on.

    Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer edits, while the overall protocol messaging stays hidden from him. Therefore, I thought using an abstract class with the abstract methods onInit(), onConfig() in it should be the right thing. The static Main method I would like to hide, since within it, e.g., there will be some part which connects to the TCP port, reacts to the Init message and calls the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call abstract functions from the Main method in the abstract master class. Let me give a pseudo-example of my idea:

        public abstract class MasterClass {
            static void Main(string[] args) {
                // 1. open TCP connection
                // 2. wait for Init message from server
                // 3. onInit();
                // 4. send acknowledgement that the Init routine has ended successfully
                // 5. wait for Config message from server
                // 6. ...
            }
            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this? EDIT: The strategy idea provided below is a good one! Check out my comment on that.
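    For illustration, here is a minimal sketch of the template-method shape being described. It is written in Java purely for illustration (the pseudo-example above looks like C#, where the same structure applies), and every name in it is invented: the base class owns the entry point plumbing and the protocol loop, and only the abstract hooks are left to the subclass.

        // Sketch of the template-method pattern: the protocol lives in the base
        // class, subclasses only fill in the hooks. All names are illustrative.
        public abstract class ProtocolClient {

            // The protocol "engine": connect, wait for messages, dispatch to hooks.
            public final void run() {
                connect();            // hidden plumbing (TCP connect, wait for Init)
                onInit();             // hook: subclass reacts to the Init message
                sendAck("INIT");      // hidden plumbing
                onConfig();           // hook: subclass reacts to the Config message
                sendAck("CONFIG");
                // ... remaining protocol steps ...
            }

            protected abstract void onInit();
            protected abstract void onConfig();

            private void connect() { /* open socket, wait for the server */ }
            private void sendAck(String phase) { /* acknowledge the phase */ }
        }

        // What the application programmer writes:
        class MyClient extends ProtocolClient {
            protected void onInit()   { System.out.println("init logic"); }
            protected void onConfig() { System.out.println("config logic"); }

            // The entry point is not inherited, but it does not need to be: it only
            // constructs the concrete subclass and hands control to the base class.
            public static void main(String[] args) {
                new MyClient().run();
            }
        }

    In other words, the static entry point never has to be inherited; it (or a tiny launcher supplied by the library) only needs to instantiate the subclass and call the base-class driver, and the abstract hooks are reachable from the base class because they are instance methods invoked on the concrete object.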


  • How do I make these fields autopopulate from the database?

    - by dmanexe
    I have an array, which holds values like this: $items_pool = Array ( [0] => Array ( [id] => 1 [quantity] => 1 ) [1] => Array ( [id] => 2 [quantity] => 1 ) [2] => Array ( [id] => 72 [quantity] => 6 ) [3] => Array ( [id] => 4 [quantity] => 1 ) [4] => Array ( [id] => 5 [quantity] => 1 ) [5] => Array ( [id] => 7 [quantity] => 1 ) [6] => Array ( [id] => 8 [quantity] => 1 ) [7] => Array ( [id] => 9 [quantity] => 1 ) [8] => Array ( [id] => 19 [quantity] => 1 ) [9] => Array ( [id] => 20 [quantity] => 1 ) [10] => Array ( [id] => 22 [quantity] => 1 ) [11] => Array ( [id] => 29 [quantity] => 0 ) ) Next, I have a form that I am trying to populate. It loops through the item database, prints out all the possible items, and checks the ones that are already present in $items_pool. <?php foreach ($items['items_poolpackage']->result() as $item): ?> <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>"> <?php endforeach; ?> I know what logically I'm trying to accomplish here, but I can't figure out the programming. What I'm looking for, written loosely is something like this (not real code): <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>" <?php if ($items_pool['$item->id']) { echo "SELECTED"; } else { }?>> Specifically this conditional loop through the array, through all the key values (the ID) and if there's a match, the checkbox is selected. <?php if ($items_pool['$item->id']) { echo "SELECTED"; } else { }?> I understand from a loop structured like this that it may mean a lot of 'extra' processing. TL;DR - I need to echo within a loop if the item going through the loop exists within another array.


  • Register NSService with Command Alt NSKeyEquivalent

    - by mahal tertin
    my Application provides a Global Service. I'd like to install the service with a command-alt-key combination. the thing i do now is not very error prone and really hard to debug as don't really see what's happening: inside Info.plist: <key>NSServices</key> <array> <dict> <key>NSSendTypes</key> <array> <string></string> </array> <key>NSReturnTypes</key> <array> <string></string> </array> <key>NSMenuItem</key> <dict> <key>default</key> <string>Go To Window in ${PRODUCT_NAME}</string> </dict> <key>NSMessage</key> <string>bringZFToForegroundZoomOut</string> <key>NSPortName</key> <string>com.raskinformac.${PRODUCT_NAME:identifier}</string> </dict> </array> and in the code: CFStringRef serviceStatusName = (CFStringRef)[NSString stringWithFormat:@"%@ - %@ - %@", appIdentifier, appName, methodNameForService]; CFStringRef serviceStatusRoot = CFSTR("NSServicesStatus"); CFPropertyListRef pbsAllServices = (CFPropertyListRef) CFMakeCollectable ( CFPreferencesCopyAppValue(serviceStatusRoot, CFSTR("pbs")) ); // the user did not configure any custom services BOOL otherServicesDefined = pbsAllServices != NULL; BOOL ourServiceDefined = NO; if ( otherServicesDefined ) { ourServiceDefined = NULL != CFDictionaryGetValue((CFDictionaryRef)pbsAllServices, serviceStatusName); } NSUpdateDynamicServices(); NSMutableDictionary *pbsAllServicesNew = nil; if (otherServicesDefined) { pbsAllServicesNew = [NSMutableDictionary dictionaryWithDictionary:(NSDictionary*)pbsAllServices]; } else { pbsAllServicesNew = [NSMutableDictionary dictionaryWithCapacity:1]; } NSDictionary *serviceStatus = [NSDictionary dictionaryWithObjectsAndKeys: (id)kCFBooleanTrue, @"enabled_context_menu", (id)kCFBooleanTrue, @"enabled_services_menu", @"@~r", @"key_equivalent", nil]; [pbsAllServicesNew setObject:serviceStatus forKey:(NSString*)serviceStatusName]; CFPreferencesSetAppValue ( serviceStatusRoot, (CFPropertyListRef) pbsAllServicesNew, CFSTR("pbs")); Boolean result = CFPreferencesAppSynchronize(CFSTR("pbs")); if (result) { NSUpdateDynamicServices(); JLog(@"successfully installed our alt-command-R service"); } else { ALog(@"couldn't install our alt-command-R service"); } and to change the service: // once installed, its's a bit tricky to set new ones (works only in RELEASE somehow?) // quit finder // open "~/Library/Preferences/pbs.plist" and remove ch.ana.Zoom - Reveal Window in Zoom - bringZFToForegroundZoomOut inside NSServicesStatus and save // start app // /System/Library/CoreServices/pbs -dump_pboard (to see if it hat actually done what we wanted, might be empty) // /System/Library/CoreServices/pbs (to add the new services) // /System/Library/CoreServices/pbs -dump_pboard (see new linking) // and then /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder -NSDebugServices MY.APP.IDENTIFIER to restart finder so my question: is there a easier way to enable a Service with cmd-option-key? if yes, i'd gladly implement it in my software.


  • Exception thrown while accessing HTML DataTable in backing bean

    - by Denzil
    Hi folks, I am building a simple application in JSF with the CrUD functionality. I am trying to implement edit functionality using the tomahawk component .I am unable to retrieve the selected row in my backing bean. Here's my JSP file snip: <t:dataTable id="data" binding="#{selectOneRowList.dataTable}" styleClass="scrollerTable" headerClass="standardTable_Header" footerClass="standardTable_Header" rowClasses="standardTable_Row1,standardTable_Row2" columnClasses="standardTable_Column,standardTable_ColumnCentered,standardTable_Column" var="car" value="#{selectOneRowList.list}" sortColumn="#{selectOneRowList.sortColumn}" sortAscending="#{selectOneRowList.sortAscending}" preserveDataModel="false" preserveSort="true" preserveRowStates="true" rows="10" > <h:column> <f:facet name="header"> <h:outputText value="Select"/> </f:facet> <t:selectOneRow groupName="selection" id="hugo" value="#{selectOneRowList.selectedRowIndex}" onchange="submit();" immediate="true" valueChangeListener="#{selectOneRowList.processRowSelection}"/> </h:column> <h:column> <f:facet name="header"> </f:facet> <h:outputText value="#{car.id}" /> </h:column> <h:column> <f:facet name="header"> <h:outputText value="Cars" /> </f:facet> <h:outputText value="#{car.type}" /> </h:column> <t:column sortable="true"> <f:facet name="header"> <h:outputText value="Color" /> </f:facet> <h:outputText value="#{car.color}" /> </t:column> </t:dataTable> Here's my backing bean SelectOneRowList.java : public void editCar(ActionEvent event) { System.out.println("Row number ## " + _selectedRowIndex.toString() + " selected!"); System.out.println("Datatable ::"+ dataTable); System.out.println("Row Count ::" + dataTable.getRowCount()); dataItem = (SimpleCar) getDataTable().getRowData(); //selectedCar = (SimpleCar) _list.get(Integer.parseInt(_selectedRowIndex.toString())); selectedCar = (SimpleCar) dataTable.getRowData(); System.out.println(dataTable.getRowData()); } My DTO{Data Transfer Object} which is SimpleCar.java contains the variables ID, type, color and their respective setters/getters. The dataItem variable is of type "SimpleCar". The dataTable is of type HTMLDataTable. I am able to get the the first 3 SOP's but the 4th SOP isn't printed. I receive the following exception on the server : javax.faces.el.EvaluationException: Exception while invoking expression #{selectOneRowList.editCar} org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:156) javax.faces.component.UICommand.broadcast(UICommand.java:89) javax.faces.component.UIViewRoot._broadcastForPhase(UIViewRoot.java:97) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:171) org.apache.myfaces.lifecycle.InvokeApplicationExecutor.execute(InvokeApplicationExecutor.java:32) org.apache.myfaces.lifecycle.LifecycleImpl.executePhase(LifecycleImpl.java:95) org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:70) javax.faces.webapp.FacesServlet.service(FacesServlet.java:139) org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:341) On click of the edit button the editCar method in my backing bean is invoked. I need to get the data of the selected row in my backing bean. Why is the exception occurring ? The above example is taken from the tomawhawk examples WAR distributed on the website. I have gone through many links including the ones on BalusC but none of them have helped. Regards,
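    A hedged guess from the stack trace alone, rather than a certain diagnosis: UIData.getRowData() only works while the table's row index actually points at a row (for example during iteration, or inside a command component that sits in a column). When editCar is fired from a button outside the table, the row index is typically -1 at that moment, getRowData() throws, and the exception surfaces wrapped as the EvaluationException shown above. The commented-out line in the question already hints at the index-based alternative; a minimal sketch of it, using the field names from the question (note that if the table is sorted, the rendered row order may differ from the order of the backing list):

        // Sketch only: fetch the selection via the index captured by t:selectOneRow,
        // instead of calling getRowData() outside the table's row context.
        public void editCar(ActionEvent event) {
            if (_selectedRowIndex == null) {
                return; // nothing selected yet
            }
            int row = Integer.parseInt(_selectedRowIndex.toString());
            if (row >= 0 && row < _list.size()) {
                selectedCar = (SimpleCar) _list.get(row);
                System.out.println("Selected car id: " + selectedCar.getId());
            }
        }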


  • Need help with memory leaks in RSS Reader

    - by Stilton
    I'm trying to write a simple RSS reader for the iPhone, and it appeared to be working fine, until I started working with Instruments, and discovered my App is leaking massive amounts of memory. I'm using the NSXMLParser class to parse an RSS feed. My memory leaks appear to be originating from the overridden delegate methods: - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string and - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName I'm also suspicious of the code that populates the cells from my parsed data, I've included the code from those methods and a few other key ones, any insights would be greatly appreciated. - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if ([self.currentElement isEqualToString:@"title"]) { [self.currentTitle appendString:string]; } else if ([self.currentElement isEqualToString:@"link"]) { [self.currentURL appendString:string]; } else if ([self.currentElement isEqualToString:@"description"]) { [self.currentSummary appendString:string]; } } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if ([elementName isEqualToString:@"item"]) { //asdf NSMutableDictionary *item = [[NSMutableDictionary alloc] init]; [item setObject:currentTitle forKey:@"title"]; [item setObject:currentURL forKey:@"URL"]; [item setObject:currentSummary forKey:@"summary"]; [self.currentTitle release]; [self.currentURL release]; [self.currentSummary release]; [self.stories addObject:item]; [item release]; } } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell. // Set up the cell int index = [indexPath indexAtPosition: [indexPath length] - 1]; CGRect contentRect = CGRectMake(8.0, 4.0, 260, 20); UILabel *textLabel = [[UILabel alloc] initWithFrame:contentRect]; if (self.currentLevel == 0) { textLabel.text = [self.categories objectAtIndex: index]; } else { textLabel.text = [[self.stories objectAtIndex: index] objectForKey:@"title"]; } textLabel.textColor = [UIColor blackColor]; textLabel.font = [UIFont boldSystemFontOfSize:14]; [[cell contentView] addSubview: textLabel]; //[cell setText:[[stories objectAtIndex: storyIndex] objectForKey: @"title"]]; [textLabel autorelease]; return cell; } - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict { if ([elementName isEqualToString:@"item"]) { self.currentTitle = [[NSMutableString alloc] init]; self.currentURL = [[NSMutableString alloc] init]; self.currentSummary = [[NSMutableString alloc] init]; } if (currentElement != nil) { [self.currentElement release]; } self.currentElement = [elementName copy]; } - (void)dealloc { [currentElement release]; [currentTitle release]; [currentURL release]; [currentSummary release]; [currentDate release]; [stories release]; [rssParser release]; [storyTable release]; [super dealloc]; } // Override to support row selection in the table view. 
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Navigation logic may go here -- for example, create and push another view controller. // AnotherViewController *anotherViewController = [[AnotherViewController alloc] initWithNibName:@"AnotherView" bundle:nil]; int index = [indexPath indexAtPosition: [indexPath length] - 1]; if (currentLevel == 1) { StoryViewController *storyViewController = [[StoryViewController alloc] initWithURL:[[stories objectAtIndex: index] objectForKey:@"URL"] nibName:@"StoryViewController" bundle:nil]; [self.navigationController pushViewController:storyViewController animated:YES]; [storyViewController release]; } else { RootViewController *rvController = [[RootViewController alloc] initWithNibName:@"RootViewController" bundle:nil]; rvController.currentLevel = currentLevel + 1; rvController.rssIndex = index; [self.navigationController pushViewController:rvController animated:YES]; [rvController release]; } }


  • InputText inside Primefaces dataTable is not refreshed

    - by robson
    I need to have inputTexts inside datatable when form is in editable mode. Everything works properly except form cleaning with immediate="true" (without form validation). Then primefaces datatable behaves unpredictable. After filling in datatable with new data - it still stores old values. Short example: test.xhtml <h:body> <h:form id="form"> <p:dataTable var="v" value="#{test.list}" id="testTable"> <p:column headerText="Test value"> <p:inputText value="#{v}"/> </p:column> </p:dataTable> <h:dataTable var="v" value="#{test.list}" id="testTable1"> <h:column> <f:facet name="header"> <h:outputText value="Test value" /> </f:facet> <p:inputText value="#{v}" /> </h:column> </h:dataTable> <p:dataTable var="v" value="#{test.list}" id="testTable2"> <p:column headerText="Test value"> <h:outputText value="#{v}" /> </p:column> </p:dataTable> <p:commandButton value="Clear" actionListener="#{test.clear()}" immediate="true" update=":form:testTable :form:testTable1 :form:testTable2"/> <p:commandButton value="Update" actionListener="#{test.update()}" update=":form:testTable :form:testTable1 :form:testTable2"/> </h:form> </h:body> And java: import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.annotation.PostConstruct; import javax.faces.bean.ViewScoped; import javax.inject.Inject; import javax.inject.Named; @Named @ViewScoped public class Test implements Serializable { private static final long serialVersionUID = 1L; private List<String> list; @PostConstruct private void init(){ update(); } public List<String> getList() { return list; } public void setList(List<String> list) { this.list = list; } public void clear() { list = new ArrayList<String>(); } public void update() { list = new ArrayList<String>(); list.add("Item 1"); list.add("Item 2"); } } In the example above I have 3 configurations: 1. p:dataTable with p:inputText 2. h:dataTable with p:inputText 3. p:dataTable with h:outputText And 2 buttons: first clears data, second applies data Workflow: 1. Try to change data in inputTexts in p:dataTable and h:dataTable 2. Clear data of list (arrayList of string) - click "clear" button (imagine that you click cancel on form because you don't want to store data to database) 3. Load new data - click "update" button (imagine that you are openning new form with new data) Question: Why p:dataTable with p:inputText still stores manually changed data, not the loaded ones? Is there any way to force p:dataTable to behaving like h:dataTable in this case?
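    What usually causes the stale values: with immediate="true" the inputs' submitted/local values are never pushed to the model, but they are not cleared either, so when the table re-renders it redisplays the retained component state instead of the freshly loaded list. Below is a hedged sketch of one way to clear that state from the clear() listener, using the standard JSF 2 visit API and the component IDs from the form above; newer stacks offer shortcuts for the same idea (JSF 2.2's resetValues, PrimeFaces' p:resetInput) where available.

        import javax.faces.component.EditableValueHolder;
        import javax.faces.component.UIComponent;
        import javax.faces.component.visit.VisitCallback;
        import javax.faces.component.visit.VisitContext;
        import javax.faces.component.visit.VisitResult;
        import javax.faces.context.FacesContext;

        // inside the @ViewScoped Test bean from the question:
        public void clear() {
            list = new ArrayList<String>();

            // Wipe any submitted/local input state under the table so the re-render
            // shows the fresh list instead of the previously typed values.
            FacesContext ctx = FacesContext.getCurrentInstance();
            UIComponent table = ctx.getViewRoot().findComponent(":form:testTable");
            if (table != null) {
                table.visitTree(VisitContext.createVisitContext(ctx), new VisitCallback() {
                    public VisitResult visit(VisitContext context, UIComponent target) {
                        if (target instanceof EditableValueHolder) {
                            ((EditableValueHolder) target).resetValue();
                        }
                        return VisitResult.ACCEPT;
                    }
                });
            }
        }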


  • filling in the holes in the result of a query

    - by ????? ????????
    my query is returning: +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla | +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 13 | | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 | | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 8 | 37 | 29 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 374 | 30 | | 0 | 0 | 1 | 0 | 78 | 2 | 4 | 8 | 57 | 169 | 116 | 602 | 31 | | 156 | 255 | 79 | 75 | 684 | 325 | 289 | 194 | 407 | 171 | 584 | 443 | 32 | | 1561 | 2852 | 2056 | 796 | 2004 | 1755 | 879 | 1052 | 1490 | 1683 | 2532 | 2381 | 33 | | 4167 | 3841 | 4798 | 3399 | 4132 | 5849 | 3157 | 4381 | 4424 | 4487 | 4178 | 5343 | 34 | | 5472 | 5939 | 5768 | 4150 | 7483 | 6836 | 6346 | 6288 | 6850 | 7155 | 5706 | 5231 | 35 | | 5749 | 4741 | 5264 | 4045 | 6544 | 7405 | 7524 | 6625 | 6344 | 5508 | 6513 | 3854 | 36 | | 5464 | 6323 | 7074 | 4861 | 7244 | 6768 | 6632 | 7389 | 8077 | 8745 | 6738 | 5039 | 37 | | 5731 | 7205 | 7476 | 5734 | 9103 | 9244 | 7339 | 8970 | 9726 | 9089 | 6328 | 5512 | 38 | | 7262 | 6149 | 8231 | 6654 | 9886 | 9834 | 9306 | 10065 | 9983 | 9984 | 6738 | 5806 | 39 | | 5886 | 6934 | 7137 | 6978 | 9034 | 9155 | 7389 | 9437 | 9711 | 8665 | 6593 | 5337 | 40 | +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ as you can see the BLA column starts from 13. i want it to start from 1, then 2, then 3 etc......I do not want any gaps in the data. The reason there are gaps is because all of the months are 0 for that specific bla how do i get the result set to include ALL values for BLA, even ones that will yield 0 for the months? 
here are the desired results: +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla | +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 | | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | | … | … | … | … | … | … | … | … | … | … | … | … | … | +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ here's my query: WITH CTE AS ( select sum(case when datepart(month,[datetime entered]) = 1 then 1 end) as Jan, sum(case when datepart(month,[datetime entered]) = 2 then 1 end) as Feb, sum(case when datepart(month,[datetime entered]) = 3 then 1 end) as Mar, sum(case when datepart(month,[datetime entered]) = 4 then 1 end) as Apr, sum(case when datepart(month,[datetime entered]) = 5 then 1 end) as May, sum(case when datepart(month,[datetime entered]) = 6 then 1 end) as Jun, sum(case when datepart(month,[datetime entered]) = 7 then 1 end) as Jul, sum(case when datepart(month,[datetime entered]) = 8 then 1 end) as Aug, sum(case when datepart(month,[datetime entered]) = 9 then 1 end) as Sep, sum(case when datepart(month,[datetime entered]) = 10 then 1 end) as Oct, sum(case when datepart(month,[datetime entered]) = 11 then 1 end) as Nov, sum(case when datepart(month,[datetime entered]) = 12 then 1 end) as Dec, DATEPART(yyyy,[datetime entered]) as [Year], bla= CASE WHEN datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108) >191 THEN 192 ELSE datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108) END --,datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE)) AS Sort_Days, --DATEPART(hour, [datetime completed] ) AS Sort_Hours from [TurnAround] group by datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108), DATEPART(yyyy,[datetime entered]) , [datetime entered] --[DateTime Completed] ) SELECT ISNULL(SUM(Jan),0) Jan, ISNULL(SUM(Feb),0) Feb, ISNULL(SUM(Mar),0) Mar, ISNULL(SUM(Apr),0) Apr, ISNULL(SUM(May),0) May, ISNULL(SUM(Jun),0) Jun, ISNULL(SUM(Jul),0) Jul, ISNULL(SUM(Aug),0) Aug, ISNULL(SUM(Sep),0) Sep, ISNULL(SUM(Oct),0) Oct, ISNULL(SUM(Nov),0) Nov, ISNULL(SUM(Dec),0) Dec, [year], --,Sort_Hours, --Sort_Days, A.RN Bla FROM ( SELECT *, RN=ROW_NUMBER() OVER(ORDER BY object_id) FROM sys.all_objects) A LEFT JOIN CTE B ON A.RN = CASE WHEN B.Bla > 191 THEN 192 ELSE B.Bla END WHERE A.RN BETWEEN 1 AND 192 GROUP BY A.RN,[year]


  • Win32 -- Object cleanup and global variables

    - by KaiserJohaan
    Hello, I've got a question about global variables and object cleanup in c++. For example, look at the code here; case WM_PAINT: paintText(&hWnd); break; void paintText(HWND* hWnd) { PAINTSTRUCT ps; HBRUSH hbruzh = CreateSolidBrush(RGB(0,0,0)); HDC hdz = BeginPaint(*hWnd,&ps); char s1[] = "Name"; char s2[] = "IP"; SelectBrush(hdz,hbruzh); SelectFont(hdz,hFont); SetBkMode(hdz,TRANSPARENT); TextOut(hdz,3,23,s1,sizeof(s1)); TextOut(hdz,10,53,s2,sizeof(s2)); EndPaint(*hWnd,&ps); DeleteObject(hdz); DeleteObject(hbruzh); // bad? DeleteObject(ps); // bad? } 1)First of all; which objects are good to delete and which ones are NOT good to delete and why? Not 100% sure of this. 2)Since WM_PAINT is called everytime the window is redrawn, would it be better to simply store ps, hdz and hbruzh as global variables instead of re-initializing them everytime? The downside I guess would be tons of global variables in the end _ but performance-wise would it not be less CPU-consuming? I know it won't matter prolly but I'm just aiming for minimalistic as possible for educational purposes. 3) What about libraries that are loaded in? For example: // // Main // int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { // initialize vars HWND hWnd; WNDCLASSEX wc; HINSTANCE hlib = LoadLibrary("Riched20.dll"); ThishInstance = hInstance; ZeroMemory(&wc,sizeof(wc)); // set WNDCLASSEX props wc.cbSize = sizeof(WNDCLASSEX); wc.lpfnWndProc = WindowProc; wc.hInstance = ThishInstance; wc.hIcon = LoadIcon(hInstance,MAKEINTRESOURCE(IDI_MYICON)); wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)COLOR_WINDOW; wc.lpszClassName = TEXT("PimpClient"); RegisterClassEx(&wc); // create main window and display it hWnd = CreateWindowEx(NULL, wc.lpszClassName, TEXT("PimpClient"), 0, 300, 200, 450, 395, NULL, NULL, hInstance, NULL); createWindows(&hWnd); ShowWindow(hWnd,nCmdShow); // loop message queue MSG msg; while (GetMessage(&msg, NULL,0,0)) { TranslateMessage(&msg); DispatchMessage(&msg); } // cleanup? FreeLibrary(hlib); return msg.wParam; } 3cont) is there a reason to FreeLibrary at the end? I mean when the process terminates all resources are freed anyway? And since the library is used to paint text throughout the program, why would I want to free before that? Cheers


  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this:

        data ShowBox = forall s. Show s => SB s

    (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used and I simply cannot wrap my mind around its use in things like this:

        runST :: forall a. (forall s. ST s a) -> a

    Or explaining why these are different:

        foo :: (forall a. a -> a) -> (Char,Bool)
        bar :: forall a. ((a -> a) -> (Char, Bool))

    Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.)

    The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into use in practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems:

    They're incomplete. They explain one part of the use of this keyword (like "existential types") which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above).

    They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)

    They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics.

    (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much a dullard to comprehend them.)

    It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure.

    So... On to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?


  • parallel computation for an Iterator of elements in Java

    - by Brian Harris
    I've had the same need a few times now and wanted to get other thoughts on the right way to structure a solution. The need is to perform some operation on many elements on many threads without needing to have all elements in memory at once, just the ones under computation. As in, Iterables.partition is insufficient because it brings all elements into memory up front. Expressing it in code, I want to write a BulkCalc2 that does the same thing as BulkCalc1, just in parallel. Below is sample code that illustrates my best attempt. I'm not satisfied because it's big and ugly, but it does seem to accomplish my goals of keeping threads highly utilized until the work is done, propagating any exceptions during computation, and not having more than numThreads instances of BigThing necessarily in memory at once. I'll accept the answer which meets the stated goals in the most concise way, whether it's a way to improve my BulkCalc2 or a completely different solution. interface BigThing { int getId(); String getString(); } class Calc { // somewhat expensive computation double calc(BigThing bigThing) { Random r = new Random(bigThing.getString().hashCode()); double d = 0; for (int i = 0; i < 100000; i++) { d += r.nextDouble(); } return d; } } class BulkCalc1 { final Calc calc; public BulkCalc1(Calc calc) { this.calc = calc; } public TreeMap<Integer, Double> calc(Iterator<BigThing> in) { TreeMap<Integer, Double> results = Maps.newTreeMap(); while (in.hasNext()) { BigThing o = in.next(); results.put(o.getId(), calc.calc(o)); } return results; } } class SafeIterator<T> { final Iterator<T> in; SafeIterator(Iterator<T> in) { this.in = in; } synchronized T nextOrNull() { if (in.hasNext()) { return in.next(); } return null; } } class BulkCalc2 { final Calc calc; final int numThreads; public BulkCalc2(Calc calc, int numThreads) { this.calc = calc; this.numThreads = numThreads; } public TreeMap<Integer, Double> calc(Iterator<BigThing> in) { ExecutorService e = Executors.newFixedThreadPool(numThreads); List<Future<?>> futures = Lists.newLinkedList(); final Map<Integer, Double> results = new MapMaker().concurrencyLevel(numThreads).makeMap(); final SafeIterator<BigThing> it = new SafeIterator<BigThing>(in); for (int i = 0; i < numThreads; i++) { futures.add(e.submit(new Runnable() { @Override public void run() { while (true) { BigThing o = it.nextOrNull(); if (o == null) { return; } results.put(o.getId(), calc.calc(o)); } } })); } e.shutdown(); for (Future<?> future : futures) { try { future.get(); } catch (InterruptedException ex) { // swallowing is OK } catch (ExecutionException ex) { throw Throwables.propagate(ex.getCause()); } } return new TreeMap<Integer, Double>(results); } }
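    As a point of comparison, here is a sketch of a slightly more compact shape under the same constraints. It reuses the Calc and BigThing types from the question, the class name BulkCalc3 is invented, and it trades the hand-rolled SafeIterator for a Semaphore that caps how many elements are in flight, so only a bounded number of BigThings exist at any moment (the futures list still grows with the element count, but those are small objects):

        import java.util.*;
        import java.util.concurrent.*;

        class BulkCalc3 {
            final Calc calc;
            final int numThreads;

            BulkCalc3(Calc calc, int numThreads) {
                this.calc = calc;
                this.numThreads = numThreads;
            }

            public TreeMap<Integer, Double> calc(Iterator<BigThing> in) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(numThreads);
                final Semaphore inFlight = new Semaphore(numThreads * 2); // bound on queued + running work
                final Map<Integer, Double> results = new ConcurrentHashMap<Integer, Double>();
                List<Future<?>> futures = new ArrayList<Future<?>>();

                while (in.hasNext()) {
                    inFlight.acquire();               // blocks until a worker slot frees up
                    final BigThing o = in.next();
                    futures.add(pool.submit(new Runnable() {
                        public void run() {
                            try {
                                results.put(o.getId(), calc.calc(o));
                            } finally {
                                inFlight.release();   // free the slot even if calc throws
                            }
                        }
                    }));
                }
                pool.shutdown();

                for (Future<?> f : futures) {
                    try {
                        f.get();                      // propagate any worker exception
                    } catch (ExecutionException ex) {
                        throw new RuntimeException(ex.getCause());
                    }
                }
                return new TreeMap<Integer, Double>(results);
            }
        }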


  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution there: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29 # cd /proc/sys/net/bridge # ls bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-call-ip6tables bridge-nf-filter-vlan-tagged # for f in bridge-nf-*; do echo 0 $f; done But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for? END OF UPDATE I need to bridge LXC containers to the physical interface (eth0) of my host, reading numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IP (which I've previously done KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers. The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with : lxc-create -t ubuntu -n mycontainer So they also run Ubuntu 12.10. Content of /var/lib/lxc/mycontainer/config is: lxc.utsname = mycontainer lxc.mount = /var/lib/lxc/test/fstab lxc.rootfs = /var/lib/lxc/test/rootfs lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.name = eth0 lxc.network.veth.pair = vethmycontainer lxc.network.ipv4 = 179.43.46.233 lxc.network.hwaddr= 02:00:00:86:5b:11 lxc.devttydir = lxc lxc.tty = 4 lxc.pts = 1024 lxc.arch = amd64 lxc.cap.drop = sys_module mac_admin mac_override lxc.pivotdir = lxc_putold # uncomment the next line to run the container unconfined: #lxc.aa_profile = unconfined lxc.cgroup.devices.deny = a # Allow any mknod (but not using the node) lxc.cgroup.devices.allow = c *:* m lxc.cgroup.devices.allow = b *:* m # /dev/null and zero lxc.cgroup.devices.allow = c 1:3 rwm lxc.cgroup.devices.allow = c 1:5 rwm # consoles lxc.cgroup.devices.allow = c 5:1 rwm lxc.cgroup.devices.allow = c 5:0 rwm #lxc.cgroup.devices.allow = c 4:0 rwm #lxc.cgroup.devices.allow = c 4:1 rwm # /dev/{,u}random lxc.cgroup.devices.allow = c 1:9 rwm lxc.cgroup.devices.allow = c 1:8 rwm lxc.cgroup.devices.allow = c 136:* rwm lxc.cgroup.devices.allow = c 5:2 rwm # rtc lxc.cgroup.devices.allow = c 254:0 rwm #fuse lxc.cgroup.devices.allow = c 10:229 rwm #tun lxc.cgroup.devices.allow = c 10:200 rwm #full lxc.cgroup.devices.allow = c 1:7 rwm #hpet lxc.cgroup.devices.allow = c 10:228 rwm #kvm lxc.cgroup.devices.allow = c 10:232 rwm Then I changed my host /etc/network/interfaces to: auto lo iface lo inet loopback auto br0 iface br0 inet static bridge_ports eth0 bridge_fd 0 address 92.281.86.226 netmask 255.255.255.0 network 92.281.86.0 broadcast 92.281.86.255 gateway 92.281.86.254 dns-nameservers 213.186.33.99 dns-search ovh.net When I try command line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard reboot it. I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 179.43.46.233 netmask 255.255.255.255 broadcast 178.33.40.233 gateway 92.281.86.254 It takes several minutes for mycontainer to start (lxc-start -n mycontainer). I tried replacing gateway 92.281.86.254 by : post-up route add 92.281.86.254 dev eth0 post-up route add default gw 92.281.86.254 post-down route del 92.281.86.254 dev eth0 post-down route del default gw 92.281.86.254 My container then starts instantly. 
But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping from mycontainer to any IP (including the host's) : ubuntu@mycontainer:~$ ping 92.281.86.226 PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data. ^C --- 92.281.86.226 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 5031ms And my host cannot ping the container: root@host:~# ping 179.43.46.233 PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data. ^C --- 179.43.46.233 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4000ms My container's ifconfig: ubuntu@mycontainer:~$ ifconfig eth0 Link encap:Ethernet HWaddr 02:00:00:86:5b:11 inet addr:179.43.46.233 Bcast:255.255.255.255 Mask:0.0.0.0 inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:64 errors:0 dropped:6 overruns:0 frame:0 TX packets:54 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4070 (4.0 KB) TX bytes:4168 (4.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:32 errors:0 dropped:0 overruns:0 frame:0 TX packets:32 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2496 (2.4 KB) TX bytes:2496 (2.4 KB) My host's ifconfig: root@host:~# ifconfig br0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b inet addr:92.281.86.226 Bcast:91.121.67.255 Mask:255.255.255.0 inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1453 errors:0 dropped:18 overruns:0 frame:0 TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:145125 (145.1 KB) TX bytes:299943 (299.9 KB) eth0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3178 errors:0 dropped:0 overruns:0 frame:0 TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:298263 (298.2 KB) TX bytes:309167 (309.1 KB) Interrupt:20 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:300 (300.0 B) TX bytes:300 (300.0 B) vethtest Link encap:Ethernet HWaddr fe:0d:7f:3e:70:88 inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:54 errors:0 dropped:0 overruns:0 frame:0 TX packets:67 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4168 (4.1 KB) TX bytes:4250 (4.2 KB) virbr0 Link encap:Ethernet HWaddr de:49:c5:66:cf:84 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc). root@host:~# brctl show bridge name bridge id STP enabled interfaces br0 8000.4c72b943652b no eth0 vethtest I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-) Vianney
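    For anyone hitting the same wall, here is a minimal sketch of the two pieces that usually have to line up in this kind of bridged, failover-IP setup; every address below is the placeholder from the question, and the whole thing is an assumption to adapt, not a verified fix.

        # host: /etc/sysctl.conf - persist the bridge-netfilter bypass from the UPDATE
        # (same effect at runtime as: for f in /proc/sys/net/bridge/bridge-nf-*; do echo 0 > $f; done)
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-filter-vlan-tagged = 0

        # container: /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces -
        # /32 address plus explicit host routes, the usual failover-IP pattern
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 179.43.46.233
            netmask 255.255.255.255
            post-up route add 92.281.86.254 dev eth0
            post-up route add default gw 92.281.86.254
            post-down route del default gw 92.281.86.254
            post-down route del 92.281.86.254 dev eth0

    Two details worth double-checking against the configuration quoted above: lxc.network.hwaddr has to stay identical to the MAC registered for the failover IP in the hosting panel, and the broadcast 178.33.40.233 in the attempted container config does not match the 179.43.46.233 address it sits next to.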

    Read the article

  • WPF BackgroundWorker Execution

    - by Sanju
    Hi All, I've been programming C# for a while now (I'm a Computer Science major), and have never had to implement threading. I am currently building an application for a client that requires threading for a series of operations. Due to the nature of the client/provider agreement, I cannot release the source I am working with. The code that I have posted is in a general form. Pretty much the basic idea of what I am implementing, excluding the content specific source. The first Snippet demonstrates my basic structure, The Progress class is a custom progress window that is displayed as a dialog to prevent user UI interaction. I am currently using the code to complete database calls based on a collection of objects in another area of the application code. For example, I have a collection of "Objects" of one of my custom classes, that I perform these database calls on or on behalf of. My current set up works just fine when I call the "GeneralizedFunction" one and only one time. What I need to do is call this function once for every object in the collection. I was attempting to use a foreach loop to iterate through the collection, then I tried a for loop. For both loops, the result was the same. The Async operation performs exactly as desired for the first item in the collection. This, however, is the only one it works for. For each subsequent item, the progress box (my custom window) displays and immediately closes. In terms of the database calls, the calls for the first item in the collection are the only ones that successfully complete. All others aren't attempted. I've tried everything that I know and don't know to do, but I just cannot seem to get it to work. How can I get this to work for my entire collection? Any and all help is very greatly appreciated. Progress progress; BackgroundWorker worker; // Cancel the current process private void CancelProcess(object sender, EventArgs e) { worker.CancelAsync(); } // The main process ... "Generalized" private void GeneralizedFunction() { progress = new Progress(); progress.Cancel += CancelProcess; progress.Owner = this; Dispatcher pDispatcher = progress.Dispatcher; worker = new BackgroundWorker(); worker.WorkerSupportsCancellation = true; object[] workArgs = { arg1, arg2, arg3}; worker.DoWork += delegate(object s, DoWorkEventArgs args) { /* All main logic here */ foreach(Class ClassObject in ClassObjectCollection) { //Some more logic here UpdateProgressDelegate update = new UpdateProgressDelegate(updateProgress); pDispatcher.BeginInvoke(update, arg1,arg2,arg3); Thread.Sleep(1000); } }; worker.RunWorkerCompleted += delegate(object s, RunWorkerCompletedEventArgs args) { progress.Close(); }; worker.RunWorkerAsync(workArgs); progress.ShowDialog(); } public delegate void UpdateProgressDelegate(arg1,arg2,arg3); private void updateProgress(arg1,arg2,arg3) { //Update progress }
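    If the goal is one progress dialog that covers the whole collection, a hedged sketch of the usual shape is a single worker whose DoWork iterates the collection, instead of calling GeneralizedFunction once per object - ShowDialog blocks, so a second call cannot start until the first dialog has closed. The sketch below reuses the asker's placeholder names (Progress, Class, ClassObjectCollection, updateProgress, arg1-arg3); they are assumptions, not framework types.

        // One BackgroundWorker, one modal progress dialog; the per-object loop
        // lives inside DoWork so the UI thread only shows the dialog once.
        private void ProcessAllObjects()
        {
            progress = new Progress();
            progress.Cancel += CancelProcess;
            progress.Owner = this;
            Dispatcher pDispatcher = progress.Dispatcher;

            worker = new BackgroundWorker { WorkerSupportsCancellation = true };
            worker.DoWork += delegate(object s, DoWorkEventArgs args)
            {
                foreach (Class obj in ClassObjectCollection)
                {
                    if (worker.CancellationPending) { args.Cancel = true; return; }
                    // per-object database work goes here
                    pDispatcher.BeginInvoke(new UpdateProgressDelegate(updateProgress),
                                            arg1, arg2, arg3);
                }
            };
            worker.RunWorkerCompleted += delegate { progress.Close(); };
            worker.RunWorkerAsync();
            progress.ShowDialog();   // returns once RunWorkerCompleted closes the dialog
        }

    If each object really does need its own worker, the alternative is to chain them: start the next worker from the previous one's RunWorkerCompleted handler rather than from a foreach loop.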

    Read the article

  • numpy calling sse2 via ctypes

    - by Daniel
    Hello, In brief, I am trying to call into a shared library from python, more specifically, from numpy. The shared library is implemented in C using sse2 instructions. Enabling optimisation, i.e. building the library with -O2 or –O1, I am facing strange segfaults when calling into the shared library via ctypes. Disabling optimisation (-O0), everything works out as expected, as is the case when linking the library to a c-program directly (optimised or not). Attached you find a snipped which exhibits the delineated behaviour on my system. With optimisation enabled, gdb reports a segfault in __builtin_ia32_loadupd (__P) at emmintrin.h:113. The value of __P is reported as optimised out. test.c: #include <emmintrin.h> #include <complex.h> void test(const int m, const double* x, double complex* y) { int i; __m128d _f, _x, _b; double complex f __attribute__( (aligned(16)) ); double complex b __attribute__( (aligned(16)) ); __m128d* _p; b = 1; _b = _mm_loadu_pd( (double *) &b ); _p = (__m128d*) y; for(i=0; i<m; ++i) { f = cexp(-I*x[i]); _f = _mm_loadu_pd( (double *) &f ); _x = _mm_loadu_pd( (double *) &x[i] ); _f = _mm_shuffle_pd(_f, _f, 1); *_p = _mm_add_pd(*_p, _f); *_p = _mm_add_pd(*_p, _x); *_p = _mm_mul_pd(*_p,_b); _p++; } return; } Compiler flags: gcc -o libtest.so -shared -std=c99 -msse2 -fPIC -O2 -g -lm test.c test.py: import numpy as np import os def zerovec_aligned(nr, dtype=np.float64, boundary=16): '''Create an aligned array of zeros. ''' size = nr * np.dtype(dtype).itemsize tmp = np.zeros(size + boundary, dtype=np.uint8) address = tmp.__array_interface__['data'][0] offset = boundary - address % boundary return tmp[offset:offset + size].view(dtype=dtype) lib = np.ctypeslib.load_library('libtest', '.' ) lib.test.restype = None lib.test.argtypes = [np.ctypeslib.ctypes.c_int, np.ctypeslib.ndpointer(np.float64, flags=('C', 'A') ), np.ctypeslib.ndpointer(np.complex128, flags=('C', 'A', 'W') )] n = 13 y = zerovec_aligned(n, dtype=np.complex128) x = np.ones(n, dtype=np.float64) # x = zerovec_aligned(n, dtype=np.float64) # x[:] = 1. lib.test(n,x,y) My system: Ubuntu Linux i686 2.6.31-22-generic Compiler: gcc (Ubuntu 4.4.1-4ubuntu9) Python: Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] Numpy: 1.4.0 I have taken provisions (cf. python code) that y is aligned and the alignment of x should not matter (I think; explicitly aligning x does not solve the problem though). Note also that i use _mm_loadu_pd instead of _mm_load_pd when loading b and f. For the C-only version _mm_load_pd works (as expected). However, when calling the function via ctypes using _mm_load_pd always segfaults (independent of optimisation). I have tried several days to sort out this issue without success ... and I am on the verge beating my monitor to death. Any input welcome. Daniel
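    One low-cost thing to rule out before blaming the intrinsics: NumPy's 'A' (aligned) flag only guarantees alignment for the dtype itself (typically 8 bytes for float64), not necessarily the 16 bytes the SSE loads want, so it is worth printing the actual addresses that cross the ctypes boundary. A small helper using only what test.py already imports; the -mstackrealign rebuild mentioned in the comment is an experiment to try, not a confirmed fix.

        import numpy as np

        def report_alignment(**arrays):
            """Print each array's data address and its offset from a 16-byte boundary."""
            for name, a in arrays.items():
                addr = a.__array_interface__['data'][0]
                print("%s: 0x%x (addr %% 16 = %d)" % (name, addr, addr % 16))

        # usage with the buffers from test.py, e.g. right before lib.test(n, x, y):
        # report_alignment(x=x, y=y)
        #
        # if both offsets are 0 and the -O2 build still segfaults, rebuilding the
        # shared library with "-O2 -mstackrealign" is a cheap experiment, since a
        # caller reached through ctypes is not guaranteed to keep the 16-byte stack
        # alignment that optimised SSE code can assume.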

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people in on a wide spectrum of platforms. What do I have to consider to design it right? To make this questions more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scrips, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be to much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way Clients may change the objects, and the library must be notified thereof The library may change the objects, and the client must be notified thereof, if it already has a handle to that object The library must be multi-threaded, because it will maintain network connections to several other hosts While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure) Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++) But my interface has to be a clean C interface that does not involve name mangling Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory For usage in a C++ application, I might also offer an object oriented interface (Which is also prone to name mangling, so the App must either use the same compiler, or include the library in source form) Is this also true for usage in C# ? For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client For usage on Java ME, there is no such thing as JNI, but I might go with Nested VM For usage in Bash scripts, there must be an executable with a command line interface For the other client languages, I have no idea For most client languages, it would be nice to have kind of an adapter interface written in that language. 
I think there are tools to automatically generate this for Java and some other languages. For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based - but I don't know if it's worth the effort. Possible subquestions: is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
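    Since the assumptions above already converge on a flat, name-mangling-free C surface with opaque handles and callbacks, here is a hypothetical sketch of what that style tends to look like; every identifier in it is invented for illustration and not part of any existing library.

        /* mylib.h - hypothetical sketch of a flat, opaque-handle C API */
        #ifndef MYLIB_H
        #define MYLIB_H

        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef struct mylib_node mylib_node;   /* opaque: clients only ever hold pointers */
        typedef void (*mylib_changed_cb)(mylib_node *node, void *user_data);

        /* traversal and attribute access over the internal object graph */
        mylib_node *mylib_node_parent(mylib_node *node);
        int         mylib_node_get_int(mylib_node *node, const char *attribute);
        int         mylib_node_set_int(mylib_node *node, const char *attribute, int value);

        /* change notification from the library back to the client */
        void        mylib_set_changed_callback(mylib_changed_cb cb, void *user_data);

        #ifdef __cplusplus
        }
        #endif
        #endif /* MYLIB_H */

    A header in this shape is what JNI/JNA, Python's ctypes, .NET P/Invoke and SWIG digest most easily, which covers most of the client-language list above; per-language object-oriented adapters can then be thin wrappers over it.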

    Read the article

  • How to find source of 301/302 redirect loop? Heroku GoDaddy Zerigo

    - by user179288
    this should be a relatively simple problem but I'm having trouble.I hope this is the right forum to post on as I've seen people get booted off stack-overflow for this sort of thing. I've setup a web app on heroku (cedar stack) at my-web-app.herokuapp.com and I'm trying to direct my-domain.com and www.my-domain.com to it. As per instructions on the heroku documentation, I've set my-domain.com to redirect (forwarding) to www.my-domain.com and then set a C-Name from www.my-domain.com to my-web-app.herokuapp.com. But the C-Name doesn't seem to be working right and is sending back to my-domain.com, causing a loop and I can't work out why. I first configured these setting at GoDaddy.com where I registered the domain but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers on GoDaddy to the ones given for Zerigo. However the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671 ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5 ;; QUESTION SECTION: ;drop-circles.com. IN ANY ;; ANSWER SECTION: drop-circles.com. 433 IN NS b.ns.zerigo.net. drop-circles.com. 433 IN NS d.ns.zerigo.net. drop-circles.com. 433 IN NS e.ns.zerigo.net. drop-circles.com. 433 IN NS a.ns.zerigo.net. drop-circles.com. 433 IN NS c.ns.zerigo.net. drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900 drop-circles.com. 433 IN A 64.27.57.29 drop-circles.com. 433 IN A 64.27.57.24 ;; ADDITIONAL SECTION: d.ns.zerigo.net. 68935 IN A 174.36.24.250 e.ns.zerigo.net. 69015 IN A 72.26.219.150 a.ns.zerigo.net. 72602 IN A 64.27.57.11 c.ns.zerigo.net. 69204 IN A 109.74.192.232 b.ns.zerigo.net. 70549 IN A 174.37.229.229 ;; Query time: 15 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:07 2013 ;; MSG SIZE rcvd: 293 Here is the output from dig for www.my-domain.com ("www.drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;www.drop-circles.com. IN ANY ;; ANSWER SECTION: www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com. 
;; Query time: 19 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:15 2013 ;; MSG SIZE rcvd: 83 And from Fiddler if I use the inspector when I try either address I get a series of requests, with the my-domain.com ("drop-circles.com") looking like this: Request: GET http://drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: drop-circles.com Response: HTTP/1.1 302 Found Server: nginx/0.8.54 Date: Wed, 26 Jun 2013 13:26:55 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Status: 302 Found Location: http://www.drop-circles.com/ Content-Length: 113 <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html> And the www.my-domain.com ("www.drop-circles.com") looking like this: Request: GET http://www.drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.drop-circles.com Response: HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Wed, 26 Jun 2013 13:26:56 GMT Location: http://drop-circles.com/ Vary: Accept X-Powered-By: Express Content-Length: 104 Connection: keep-alive <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p> Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what it might be could someone at least tell me which company GoDaddy, Zerigo or Heroku should I go to for support since I don't really know enough to be able to say where the problem lies. Thank you.
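    Reading the two captures above, the 302 towards www is answered by nginx (which looks like the registrar-style forwarding service), while the 301 back to the bare domain is stamped X-Powered-By: Express - i.e. it reads as though the application itself is bouncing www back to the apex, most likely a canonical-host redirect in the Express app or an add-on. A quick way to keep watching who answers each hop while changing settings; plain curl and dig, the only assumptions are the hostnames already used above:

        # one request per hostname, headers only, no redirect following
        curl -sI http://drop-circles.com/     | egrep -i '^(HTTP|Location|Server|X-Powered-By)'
        curl -sI http://www.drop-circles.com/ | egrep -i '^(HTTP|Location|Server|X-Powered-By)'

        # ask Zerigo's authoritative servers directly, bypassing any cached records
        dig +short drop-circles.com @a.ns.zerigo.net
        dig +short www.drop-circles.com @a.ns.zerigo.net

    If the 301 really is coming from the app, the loop goes away by removing (or reversing) that redirect so the app prefers www, matching the forwarding rule set at the registrar.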

    Read the article

  • C++: casting to void* and back

    - by MInner
    *--- Edit: now with the whole source ---* When I debug it at the end, "get" and "value" have different values! Probably I'm converting to void* and back to User the wrong way? #include <db_cxx.h> #include <stdio.h> struct User{ User(){} int name; int town; User(int a){}; inline int get_index(int a){ return town; } //for other stuff }; int main(){ try { DbEnv* env = new DbEnv(NULL); env->open("./", DB_CREATE | DB_INIT_MPOOL | DB_THREAD | DB_INIT_LOCK | DB_INIT_TXN | DB_RECOVER | DB_INIT_LOG, 0); Db* datab = new Db(env, 0); datab->open(NULL, "db.dbf", NULL, DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0); Dbt key, value, get; char a[10] = "bbaaccd"; User u; u.name = 1; u.town = 34; key.set_data(a); key.set_size(strlen(a) + 1 ); value.set_data((void*)&u); value.set_size(sizeof(u)); get.set_flags(DB_DBT_MALLOC); DbTxn* txn; env->txn_begin(NULL, &txn, 0); datab->put(txn, &key, &value, 0); datab->get(txn, &key, &get, 0); txn->commit(0); User g; g = *((User*)&get); printf("%d", g.town); getchar(); return 0; }catch (DbException &e){ printf("%s", e.what()); getchar(); } Proposed solution: create a kind of "serializer" that converts all PODs into void* and then reassembles the pieces. PS: or I could rewrite User as a plain POD type, and then everything should be all right, I hope. Added later: it's strange, but I cast a definitely non-POD object (it has a std::string inside) to void* and back and it's fine - without sending it through the db. How can that be? Yet after I cast and send a definitely POD object through the db (no extra methods, all members POD, a simple struct {int a; int b; ...}), I get back a corrupted one. What's wrong with my approach? Added about a week after the first edit: I compiled it once more, just to have a look at what kind of garbage it returns, and - argh - it's okay! A reasonable question (in 99.999 percent of situations the right answer is "mine", but here...): whose fault is this, mine or Visual Studio's?
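    For what it is worth, one detail in the read-back path is easy to miss: get is a Dbt descriptor, so *((User*)&get) reinterprets the descriptor object itself rather than the bytes it points to - the stored record lives behind get.get_data(). A hedged sketch of the read-back side only, assuming User really has the same POD layout on both sides:

        // after datab->get(txn, &key, &get, 0);  (needs <cstring> for std::memcpy)
        User g;
        std::memcpy(&g, get.get_data(), get.get_size());   // copy the stored bytes out
        printf("%d\n", g.town);
        free(get.get_data());   // DB_DBT_MALLOC: the library malloc'd this buffer, the caller frees it

    The same applies to any serialization scheme layered on top: what round-trips through the Dbt is the raw byte buffer handed to set_data()/returned by get_data(), never the Dbt object itself.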

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by skriatok
    I'm in DMZ mode, so I'm firewalling myself, stealthy all ok, but I get faulty test results from Shields Up that there are pings. Yesterday I couldn't make a connection to game servers work, because ping block was enabled (on the router). I disabled it, but this persists even due to my firewall. What is the connection between me and my router in DMZ mode (for my machine, there is bunch of others too behind router firewall)? When it allows router affecting if I'm pingable or not and if router has setting not blocking ping, rules in my iptables for this scenario do not work. Please ignore commented rules, I do uncomment them as I want. These two should do the job right? iptables -A INPUT -p icmp --icmp-type echo-request -j DROP echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all Here are my iptables: #!/bin/sh # Begin /bin/firewall-start # Insert connection-tracking modules (not needed if built into the kernel). #modprobe ip_tables #modprobe iptable_filter #modprobe ip_conntrack #modprobe ip_conntrack_ftp #modprobe ipt_state #modprobe ipt_LOG # allow local-only connections iptables -A INPUT -i lo -j ACCEPT # free output on any interface to any ip for any service # (equal to -P ACCEPT) iptables -A OUTPUT -j ACCEPT # permit answers on already established connections # and permit new connections related to established ones (eg active-ftp) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #Gamespy&NWN #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT # Log everything else: What's Windows' latest exploitable vulnerability? iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT" # set a sane policy: everything not accepted > /dev/null iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP iptables -A INPUT -p icmp --icmp-type echo-request -j DROP # be verbose on dynamic ip-addresses (not needed in case of static IP) echo 2 > /proc/sys/net/ipv4/ip_dynaddr # disable ExplicitCongestionNotification - too many routers are still # ignorant echo 0 > /proc/sys/net/ipv4/tcp_ecn #ping death echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all # If you are frequently accessing ftp-servers or enjoy chatting you might # notice certain delays because some implementations of these daemons have # the feature of querying an identd on your box for your username for # logging. Although there's really no harm in this, having an identd # running is not recommended because some implementations are known to be # vulnerable. 
# To avoid these delays you could reject the requests with a 'tcp-reset': #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT # To log and drop invalid packets, mostly harmless packets that came in # after netfilter's timeout, sometimes scans: #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix \ "FIREWALL:INVALID" #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP # End /bin/firewall-start Active ruleset: bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 38 packets, 2228 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 844 542K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1158 111K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Active ruleset: (after editing iptables into below sugested form) bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 2567 packets, 172K bytes) pkts bytes target prot opt in out source destination 49 4157 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 412K 441M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2567 172K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 DROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 312K packets, 25M bytes) pkts bytes target prot opt in out source destination ping and syslog simultaneous screenshots from phone (pinger) and from laptop (being pinged) http://dl.dropbox.com/u/4160051/slckwr/pingfrom%20mobile.jpg http://dl.dropbox.com/u/4160051/slckwr/tailsyslog.jpg
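    One way to separate "my rules are wrong" from "the 2Wire answers on my behalf" is to pin an explicit drop at the very top of INPUT (ahead of the ESTABLISHED,RELATED accept) and watch its packet counters while the external ping test runs. Standard iptables and sysctl commands, nothing specific to this box:

        # explicit early drop, inserted at position 1 of INPUT
        iptables -I INPUT 1 -p icmp --icmp-type echo-request -j DROP

        # watch the rule counters while Shields Up / the phone pings the public IP
        watch -n1 'iptables -L INPUT -nv | head -n 8'

        # kernel-level ignore, persisted across reboots
        # (assuming the boot scripts load /etc/sysctl.conf on this Slackware box)
        echo 'net.ipv4.icmp_echo_ignore_all = 1' >> /etc/sysctl.conf
        sysctl -p

    If that first rule's counters stay at zero while the outside pings keep being answered, the echo replies are being generated upstream - the router still owns the public address in DMZplus mode - and no local iptables rule or icmp_echo_ignore_all setting will change what it sends.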

    Read the article

  • Change default markers for directions on google maps

    - by Elaine Marley
    I'm a complete noob with google maps api and I started with a given script that I'm editing to what I need to do. In this case I have a map with some points in it that come from a database. They are like this (after I get the lat/lng from the database): var route1 = 'from: 37.496764,-5.913379 to: 37.392587,-6.00023'; var route2 = 'from: 37.392587,-6.00023 to: 37.376964,-5.990838'; routes = [route1, route2]; Then my script does the following: for(var j = 0; j < routes.length; j++) { callGDirections(j); document.getElementById("dbg").innerHTML += "called "+j+"<br>"; } And then the directions: function callGDirections(num) { directionsArray[num] = new GDirections(); GEvent.addListener(directionsArray[num], "load", function() { document.getElementById("dbg").innerHTML += "loaded "+num+"<br>"; var polyline = directionsArray[num].getPolyline(); polyline.setStrokeStyle({color:colors[num],weight:3,opacity: 0.7}); map.addOverlay(polyline); bounds.extend(polyline.getBounds().getSouthWest()); bounds.extend(polyline.getBounds().getNorthEast()); map.setCenter(bounds.getCenter(),map.getBoundsZoomLevel(bounds)); }); // === catch Directions errors === GEvent.addListener(directionsArray[num], "error", function() { var code = directionsArray[num].getStatus().code; var reason="Code "+code; if (reasons[code]) { reason = reasons[code] } alert("Failed to obtain directions, "+reason); }); directionsArray[num].load(routes[num], {getPolyline:true}); } The thing is, I want to change the A and B markers that I get from google on the map to the ones for each of the points that I'm using (each has it's particular icon in the database) but I don't know how to do this. Furthermore, what would be fantastic but I'm clueless if it's even possible is the following: when I get the directions I get something like this: (a) Street A directions (b) Street B And I want (a) Name of first point directions (b) Name of second point (also from database) I understand that my knowledge of the subject is very lacking and the question might be a bit vague, but I would appreciate any tip pointing me in the right direction. EDIT: Ok, I learned a lot from the google api with this problem but I'm still far from what I need. I learned how to hide the default markers doing this: // Hide the route markers when signaled. GEvent.addListener(directionsArray[num], "addoverlay", hideDirMarkers); // Not using the directions markers so hide them. function hideDirMarkers(){ var numMarkers = directionsArray[num].getNumGeocodes() for (var i = 0; i < numMarkers; i++) { var marker = directionsArray[num].getMarker(i); if (marker != null) marker.hide(); else alert("Marker is null"); } } And now when I create new markers doing this: var point = new GLatLng(lat,lng); var marker = createMarker(point,html); map.addOverlay(marker); They appear but they are not clickable (the popup with the html won't show)
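    On the "markers are not clickable" part, the likely missing piece is that createMarker never wires a click handler that opens the info window. A hedged sketch for the v2 API used above - createMarker is the asker's function, and the iconUrl parameter is an assumed extra argument carrying the per-point icon from the database:

        // build a marker with a per-point icon and a clickable HTML popup
        function createMarker(point, html, iconUrl) {
          var icon = new GIcon(G_DEFAULT_ICON);
          if (iconUrl) { icon.image = iconUrl; }   // icon path stored in the database
          var marker = new GMarker(point, { icon: icon });
          GEvent.addListener(marker, "click", function () {
            marker.openInfoWindowHtml(html);       // the popup that currently never shows
          });
          return marker;
        }

    Combined with the hideDirMarkers approach from the edit (marker.hide() on each getMarker(i)), drawing these custom markers at the route endpoints and labelling them with the database names covers the renaming part of the question as well.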

    Read the article

  • Problem calling method inside same method in php?

    - by Fero
    Hi all, Will any one please tell me how to run this class. I am getting the FATAL ERROR: Fatal error: Call to undefined function readnumber() in E:\Program Files\xampp\htdocs\numberToWords\numberToWords.php on line 20 while giving input as 120 <?php class Test { function readnumber($num, $depth) { $num = (int)$num; $retval =""; if ($num < 0) // if it's any other negative, just flip it and call again return "negative " + readnumber(-$num, 0); if ($num > 99) // 100 and above { if ($num > 999) // 1000 and higher $retval .= readnumber($num/1000, $depth+3); $num %= 1000; // now we just need the last three digits if ($num > 99) // as long as the first digit is not zero $retval .= readnumber($num/100, 2)." hundred\n"; $retval .=readnumber($num%100, 1); // our last two digits } else // from 0 to 99 { $mod = floor($num / 10); if ($mod == 0) // ones place { if ($num == 1) $retval.="one"; else if ($num == 2) $retval.="two"; else if ($num == 3) $retval.="three"; else if ($num == 4) $retval.="four"; else if ($num == 5) $retval.="five"; else if ($num == 6) $retval.="six"; else if ($num == 7) $retval.="seven"; else if ($num == 8) $retval.="eight"; else if ($num == 9) $retval.="nine"; } else if ($mod == 1) // if there's a one in the ten's place { if ($num == 10) $retval.="ten"; else if ($num == 11) $retval.="eleven"; else if ($num == 12) $retval.="twelve"; else if ($num == 13) $retval.="thirteen"; else if ($num == 14) $retval.="fourteen"; else if ($num == 15) $retval.="fifteen"; else if ($num == 16) $retval.="sixteen"; else if ($num == 17) $retval.="seventeen"; else if ($num == 18) $retval.="eighteen"; else if ($num == 19) $retval.="nineteen"; } else // if there's a different number in the ten's place { if ($mod == 2) $retval.="twenty "; else if ($mod == 3) $retval.="thirty "; else if ($mod == 4) $retval.="forty "; else if ($mod == 5) $retval.="fifty "; else if ($mod == 6) $retval.="sixty "; else if ($mod == 7) $retval.="seventy "; else if ($mod == 8) $retval.="eighty "; else if ($mod == 9) $retval.="ninety "; if (($num % 10) != 0) { $retval = rtrim($retval); //get rid of space at end $retval .= "-"; } $retval.=readnumber($num % 10, 0); } } if ($num != 0) { if ($depth == 3) $retval.=" thousand\n"; else if ($depth == 6) $retval.=" million\n"; if ($depth == 9) $retval.=" billion\n"; } return $retval; } } $objTest = new Test(); $objTest->readnumber(120,0); ?>
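    Two details explain the fatal error without touching the algorithm: inside a class, the recursive calls have to go through $this-> (a bare readnumber() looks for a global function, hence "Call to undefined function"), and PHP concatenates strings with "." rather than "+". A minimal sketch of the corrected call sites, everything else as posted:

        // inside readnumber(): every recursive call becomes $this->readnumber(...)
        return "negative " . $this->readnumber(-$num, 0);
        $retval .= $this->readnumber($num / 1000, $depth + 3);
        $retval .= $this->readnumber($num / 100, 2) . " hundred\n";
        $retval .= $this->readnumber($num % 100, 1);
        $retval .= $this->readnumber($num % 10, 0);

        // and the caller has to echo the return value to see anything
        $objTest = new Test();
        echo $objTest->readnumber(120, 0);   // prints roughly "one hundred twenty"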

    Read the article

  • Array not showing in JList but filled in console

    - by OVERTONE
    Hey there. I've been a busy debugger today, so I'll give the short version. I've made an ArrayList that takes names from a database, then I put the contents of the ArrayList into an array of strings, and now I want to display the array's contents in a JList. The weird thing is it was working earlier. I have two methods; one is just a little practice method to make sure I was adding to the JList correctly. So here is the key code. This is the layout of my class: variables, constructor, methods. In my variables I have these 3 defined: String[] contactListNames = new String[5]; ArrayList<String> rowNames = new ArrayList<String>(); JList contactList = new JList(contactListNames); Simple enough. In my constructor I have them again: contactListNames = new String[5]; contactList = new JList(contactListNames); // I don't have the ArrayList defined here though. printSqlDetails(); // printSqlDetails makes sure the connection is alright, and it's working fine. fillContactList(); // this is the one that's causing me grief; it's where all the work happens. // fillContactListTest(); // this was the tester that makes sure I'm adding to the list alright. Here is the code for fillContactListTest(): public void fillContactListTest() { for(int i = 0;i<3;i++) { try { String contact; System.out.println(" please fill the list at index "+ i); Scanner in = new Scanner(System.in); contact = in.next(); contactListNames[i] = contact; in.nextLine(); } catch(Exception e) { e.printStackTrace(); } } } Here is the main one that's supposed to work: public void fillContactList() { int i =0; createConnection(); ArrayList<String> rowNames = new ArrayList<String>(); try { Statement stmt = conn.createStatement(); ResultSet namesList = stmt.executeQuery("SELECT name FROM Users"); try { while (namesList.next()) { rowNames.add(namesList.getString(1)); contactListNames =(String[])rowNames.toArray(new String[rowNames.size()]); // this used to print out the contents of the array list // System.out.println("" + rowNames); while(i<contactListNames.length) { System.out.println(" " + contactListNames[i]); i++; } } } catch(SQLException q) { q.printStackTrace(); } conn.commit(); stmt.close(); conn.close(); } catch(SQLException e) { e.printStackTrace(); } } I really need help here; I'm at my wits' end. I just can't see why the first method adds to the JList no problem but the second one won't. Both the contactListNames array and the ArrayList print fine and have the names in them, so I must be transferring them to the JList wrong. Please help. P.S. I'm aware this is long, but trust me, it's the short version.
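    A hedged reading of why the console shows the names but the JList stays empty: the JList was constructed around the original five-element array, and fillContactList() later points the contactListNames field at a brand-new array returned by toArray(), which the component never sees (the test method works because it writes into the original array in place). Pushing the fresh data into the JList after the query avoids that; setListData and DefaultListModel are standard Swing, the field names are the asker's:

        // at the end of fillContactList(), after the while (namesList.next()) loop
        contactListNames = rowNames.toArray(new String[rowNames.size()]);
        // hand the fresh array to the JList that is already on screen
        contactList.setListData(contactListNames);

        // or, if the contact list will keep changing, back the JList with a model:
        // DefaultListModel model = new DefaultListModel();
        // for (String name : rowNames) model.addElement(name);
        // contactList.setModel(model);

    Swing components should be touched on the event dispatch thread, so if fillContactList() ever runs off the UI thread, wrap the setListData call in SwingUtilities.invokeLater.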

    Read the article
