Search Results

Search found 4884 results on 196 pages for 'ad hoc distribution'.

  • When and how should custom hierarchies be used in Clojure?

    - by Rob Lachlan
    Clojure's system for creating an ad hoc hierarchy of keywords is familiar to most people who have spent a bit of time with the language. For example, most demos and presentations of the language include examples such as (derive ::child ::parent) and go on to show how this can be used for multi-method dispatch. In all of the slides and presentations that I've seen, the global hierarchy is used, but it is also possible to define keyword relationships in custom hierarchies. Some questions, therefore: Are there any guidelines on when this is useful or necessary? Are there any functions for manipulating hierarchies? Merging is particularly useful, so I do this:

        (defn merge-h [& hierarchies]
          (apply merge-with (partial merge-with clojure.set/union) hierarchies))

    But I was wondering if such functions already exist somewhere.

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider what happens without any flow control imposed by the sender: given a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission." Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset: it is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network, while the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance once cwnd > ssthresh.

    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd <= ssthresh) and under congestion avoidance (cwnd > ssthresh):

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::connect-request
        / start[args[1]->cs_cid] == 0 /
        {
                start[args[1]->cs_cid] = 1;
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 &&
          args[3]->tcps_cwnd <= args[3]->tcps_cwnd_ssthresh /
        {
                @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 &&
          args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
        {
                @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

    As we can see, the script only works on connections initiated after it is started, using the start[] associative array, indexed by connection ID, to record that a connection is new (start[cid] = 1). From there we simply differentiate send events where cwnd <= ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

        # dtrace -s tcp_slow_start.d
        ^C
        ALGORITHM             RADDR            RPORT  #SEG
        Slow start            10.153.125.222      20     6
        Slow start            138.3.237.7         80    14
        Slow start            10.153.125.222      21    18
        Congestion avoidance  10.153.125.222      20  1164

    We see that in the case of the YouTube video, slow start was exclusively used; most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the FTP session, the data connection on port 20 was predominantly sent with congestion avoidance in operation, while the control connection on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently: for example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.

        # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
        dtrace: description 'tcp:::send ' matched 10 probes
        ^C

               80
                   value  ------------- Distribution ------------- count
                      -1 |                                         0
                       0 |@@@@@@                                   5
                       1 |                                         0
                       2 |                                         0
                       4 |                                         0
                       8 |                                         0
                      16 |                                         0
                      32 |                                         0
                      64 |                                         0
                     128 |                                         0
                     256 |                                         0
                     512 |                                         0
                    1024 |                                         0
                    2048 |@@@@@@@@@                                8
                    4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                    8192 |                                         0

    Read the article

  • Get SSL "Broken pipe" error when trying to send push notifications

    - by emagic
    We develop an iPhone app, and have push notifications for the development and ad hoc versions working properly. But when we try to send push notifications to real user devices in our database, we get an SSL connection reset, then a Broken pipe error. We think maybe there are too many devices in our database (more than 70,000), so sending all the messages at the same time fails. So we tried sending messages to 1000 devices at a time, but still got this "Broken pipe" error after around 100 messages. And we are not sure whether those messages have been sent. Any suggestions?
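
    For what it's worth, with the old binary APNs interface a broken pipe usually means Apple dropped the connection after hitting an invalid or expired token somewhere in the batch, and anything written after that point is silently lost. Below is a minimal reconnect-and-continue sketch in Python; the gateway address is the standard legacy one, but the certificate path, payload, and batching policy are illustrative assumptions only.

        import json
        import socket
        import ssl
        import struct

        GATEWAY = ("gateway.push.apple.com", 2195)  # legacy binary APNs interface
        CERT = "prod_push.pem"                      # illustrative certificate path

        def connect():
            ctx = ssl.create_default_context()
            ctx.load_cert_chain(CERT)
            return ctx.wrap_socket(socket.create_connection(GATEWAY),
                                   server_hostname=GATEWAY[0])

        def push_all(tokens, alert):
            payload = json.dumps({"aps": {"alert": alert}}).encode("utf-8")
            conn = connect()
            for token in tokens:
                # Simple notification format: command 0, 32-byte token, payload.
                frame = struct.pack("!BH32sH%ds" % len(payload),
                                    0, 32, bytes.fromhex(token), len(payload), payload)
                try:
                    conn.sendall(frame)
                except (ssl.SSLError, BrokenPipeError, ConnectionResetError):
                    # APNs drops the connection on a bad token; reconnect and
                    # carry on with the rest of the batch instead of aborting.
                    conn.close()
                    conn = connect()
            conn.close()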

    Read the article

  • Problem generating APN SSL certificate after submitting to the App Store

    - by MikeQ
    I'm having trouble getting Apple to generate an APN SSL certificate for my app ID. I've submitted the application to the App Store, and it is pending review. I tested the application using an ad hoc app ID "${bundle_id}.adHoc" and everything went fine. I submitted to the App Store with app ID "${bundle_id}.release". Now I want to generate my production APN SSL certificate for use with my release application ID - but the developer portal doesn't want to. When I upload my certificate request, it sits for about a minute before telling me: "We are not able to generate your Profile at this time. Please try again later or try using the Provisioning Portal." Is it impossible to generate your certificate while the application is under review, or something? Should I have generated it prior to submission?

    Read the article

  • Cross-domain REST proxy with Javascript, HTML5

    - by Bosh
    I'm writing a service (say, service.com) that provides a REST API to external apps running inside IFrames. (These apps are hosted on domains outside service.com.) I'm planning a JavaScript client library for the apps to make pure-JavaScript requests to the service.com REST API - basically using postMessage and some ad-hoc encapsulation of my API calls to get messages back and forth across frames (from the outside-app.com IFrame to the service.com REST API, and back to the IFrame with a response). My question: is there any robust, general-purpose JavaScript library to accomplish the kind of cross-domain REST request proxying I need, or should I just hack it from scratch?

    Read the article

  • Can I ask Postgresql to ignore errors within a transaction

    - by fmark
    I use PostgreSQL with the PostGIS extensions for ad-hoc spatial analysis. I generally construct and issue SQL queries by hand from within psql. I always wrap an analysis session within a transaction, so if I issue a destructive query I can roll it back. However, when I issue a query that contains an error, it aborts the transaction, and any further queries elicit the following error:

        ERROR: current transaction is aborted, commands ignored until end of transaction block

    Is there a way I can turn this behaviour off? It is tiresome to roll back the transaction and rerun previous queries every time I make a typo.
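
    Two workarounds exist for exactly this situation: psql itself can do it via \set ON_ERROR_ROLLBACK interactive, which quietly issues a SAVEPOINT before each interactive statement, and the same savepoint-per-statement pattern can be scripted by hand. A rough Python/psycopg2 sketch of the latter (the connection string is made up):

        import psycopg2

        conn = psycopg2.connect("dbname=spatial")  # made-up connection string
        cur = conn.cursor()

        def run(sql):
            """Execute one statement; on error, roll back to the last savepoint
            so the enclosing transaction survives the typo."""
            cur.execute("SAVEPOINT sp")
            try:
                cur.execute(sql)
            except psycopg2.Error as exc:
                cur.execute("ROLLBACK TO SAVEPOINT sp")
                print("statement failed, transaction still open:", exc)
            else:
                cur.execute("RELEASE SAVEPOINT sp")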

    Read the article

  • Reporting framework for extending definition & execution to the end users (ASP.Net)

    - by Kabeer
    Hello. My application is a product which will have reporting capabilities. The product, when in production, is expected to have several ad-hoc reports defined by the end users. I am looking for a platform that can be tailored to harness the business entities and extend reporting capabilities (definition & execution) to the end user. Here are some constraints: From the usability standpoint, I like what MS Access reports offer; of course it is not suitable for the target web application, but it certainly is a source of inspiration usability-wise. Cost is a constraint, so something recommended from the open source world will be appreciated, though I'd like to hear about other options too. The product in question is somewhat 'generic' in nature. Mine is an ASP.NET platform.

    Read the article

  • From interpreted to native code: "dynamic" languages compiler support

    - by Daniel
    First, I am aware that "dynamic languages" is a term used mainly by a vendor; I am using it just to have a container word for languages like Perl (a favorite of mine), Python, Tcl, Ruby, PHP and so on. They are interpreted, but I use the term here to refer to languages featuring strong support for programmer efficiency and the typical constructs of modern interpreted languages. My question is: are there dynamic languages that can be compiled efficiently into native executable code - typically for Windows platforms? Which ones? Maybe using some third-party ad-hoc tools? I am not talking about huge executables carrying a full interpreter with them or similar tricks, nor some smart module able to bundle its own dependencies or some required modules, but honest, straight, standard, solid executable code. If not, is there some technical reason inhibiting the availability of such a best-of-both-worlds feature? Thanks! Daniel

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far, far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information - children could come, too - and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

    LUGM - user group meeting on 15.06.2013 in L'Escalier

    Quick overview of Linux & the LUGM

    With a little bit of delay the LUGM meeting officially started with a quick overview of and introduction to Linux, presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

    OpenELEC Mediacenter

    Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote-controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and general multimedia centre duties it offered all the necessary comfort in the living room, besides a Nintendo game console - actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.

    LUGM - our next generation of Linux users (15.06.2013)

    Project Evil Genius (PEG)

    Don't be scared by the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius: it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had thanks to other Linux users from all over the world. Within the next couple of days Ish promised to put his script up on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in - highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu).

    Exploring the beach of L'Escalier after the meeting

    'After-party' at the beach of L'Escalier

    Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.

    Resumé & outlook

    It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm fairly sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • Data Guard - Snapshot Standby Database

    - by Jian Zhang-Oracle
    Overview
    --------
    Normally a physical standby database stays in mount state and continuously applies redo shipped from the primary. To open the standby read-only while apply continues you need Active Data Guard, and even then the standby cannot accept writes. Sometimes, however, a writable copy of production data is needed - for example to run Real Application Testing (RAT) workloads. In that case the physical standby can be converted into a snapshot standby: a fully updatable (read-write) database. A snapshot standby keeps receiving redo from the primary, so data protection is preserved, but it does not apply that redo while open for testing. When testing is finished, the snapshot standby is converted back into a physical standby: the changes made during testing are discarded and the accumulated redo is applied.

    Steps
    -----
    1. Configure a fast recovery area on the standby:

        SQL> Alter system set db_recovery_file_dest_size=500M;
        System altered.
        SQL> Alter system set db_recovery_file_dest='/u01/app/oracle/snapshot_standby';
        System altered.

    2. Stop redo apply on the standby:

        SQL> alter database recover managed standby database cancel;
        Database altered.

    3. Convert the standby to a snapshot standby and open it:

        SQL> alter database convert to snapshot standby;
        Database altered.
        SQL> alter database open;
        Database altered.

    The database role is now SNAPSHOT STANDBY and the open mode is READ WRITE:

        SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;

        DATABASE_ROLE    NAME      OPEN_MODE
        ---------------- --------- --------------------
        SNAPSHOT STANDBY FSDB      READ WRITE

    4. Run the test workload, for example Real Application Testing (RAT), against the snapshot standby.

    5. When testing is complete, convert the snapshot standby back to a physical standby and restart redo apply:

        SQL> shutdown immediate;
        Database closed.
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup mount;
        ORACLE instance started.
        Database mounted.
        SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
        Database altered.
        SQL> shutdown immediate;
        ORA-01507: database not mounted
        ORACLE instance shut down.
        SQL> startup mount;
        ORACLE instance started.
        Database mounted.
        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
        Database altered.

    6. The database role is again PHYSICAL STANDBY and the open mode is MOUNTED:

        SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;

        DATABASE_ROLE    NAME      OPEN_MODE
        ---------------- --------- --------------------
        PHYSICAL STANDBY FSDB      MOUNTED

    7. Verify that redo is being shipped and applied. On the primary:

        SQL> select ads.dest_id, max(sequence#) "Current Sequence",
                    max(log_sequence) "Last Archived"
             from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
             where ad.dest_id=al.dest_id
             and al.dest_id=ads.dest_id
             and al.resetlogs_change#=(select max(resetlogs_change#) from v$archived_log)
             group by ads.dest_id;

           DEST_ID Current Sequence Last Archived
        ---------- ---------------- -------------
                 1              361           361
                 2              361           362

    On the standby:

        SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
             from (select thread# thrd, max(sequence#) almax
                   from v$archived_log
                   where resetlogs_change#=(select resetlogs_change# from v$database)
                   group by thread#) al,
                  (select thread# thrd, max(sequence#) lhmax
                   from v$log_history
                   where resetlogs_change#=(select resetlogs_change# from v$database)
                   group by thread#) lh
             where al.thrd = lh.thrd;

            Thread Last Seq Received Last Seq Applied
        ---------- ----------------- ----------------
                 1               361              361

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the coming of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for a long time, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C#'s using common ad-hoc techniques.)

    Read the article

  • How to 'hide' spurious "declared but never used" warnings?

    - by Roddy
    I'm using the C++Builder compiler, which has a minor bug whereby certain static const items from system header files can cause spurious "xyzzy is declared but never used" warnings. I'm trying to get my code 100% warning-free, so I want a way of masking these particular warnings (note - but not by simply turning off the warning!). Also, I can't modify the header files. I need a way of 'faking' the use of the items, preferably without even knowing their type. As an example, adding this function to my .cpp modules fixes the warnings for these four items, but it seems a bit ad-hoc. Is there a better, and preferably self-documenting, way of doing this?

        static int fakeUse()
        {
            return OneHour + OneMinute + OneSecond + OneMillisecond;
        }

    Read the article

  • Can PHP Perform Magic Instantiation?

    - by Aiden Bell
    Despite PHP being a pretty poor language with an ad-hoc set of libraries - in which the mix of functions and objects, random argument orders and generally ill-thought-out semantics mean constant WTF moments - I will admit it is quite fun to program in and is fairly ubiquitous. (Waiting for server-side JavaScript to flesh out, though.) Question: given a class class RandomName extends CommonAppBase {}, is there any way to automatically create an instance of any class extending CommonAppBase without explicitly using new? As a rule there will only be one class definition per PHP file, and appending new RandomName() to the end of all files is something I would like to eliminate. The extending class has no constructor; only CommonAppBase's constructor is called. Strange question, but it would be nice if anyone knows a solution. Thanks in advance, Aiden (btw, my PHP version is 5.3.2). Please state version restrictions with any answer.

    Read the article

  • unsetting application role in classic ASP

    - by user303526
    Hi, I'm trying to unset an application role but have been failing miserably. I was able to get the cookie value after setting the application role with sp_setapprole. But I haven't been able to use that cookie (type varbinary / byte array) in my query to unset it using sp_unsetapprole. If it was any other stored procedure it wouldn't have been a problem. I was able to use the Command object, create a parameter with data type adVarBinary (204) and execute the command, but the query reaches the server as:

        exec sp_executesql N'sp_unsetapprole @P1 ',N'@P1 varbinary(36)',0x01000000CD11697F8F0ED3627BC1DAD25FB9CEB3A2EC5B289C658235E510CD9F29230000

    Since sp_setapprole and sp_unsetapprole have to be run ad hoc, SQL Server fails to run this line. And I'm finding it hard to append the varbinary cookie value to a simple query such as 'sp_unsetapprole ' & varKookie so that it runs "directly" on the server. Any kind of suggestions are welcome. Thanks, Nandagopal

    Read the article

  • What's the best way to capture output from SQL Management Studio and paste it into an Outlook email?

    - by Decker
    I'm constantly executing ad-hoc queries in SQL Management Studio and need to send the results to people via email. This happens several times a day, so I'm looking for the best way to copy the results of a query from the results window into an Outlook email body so that it can be formatted in a reader-friendly manner. I haven't come up with anything that works well for me. When it really matters, I end up going into Excel, executing the query from within there and then attaching the resulting spreadsheet. I'm looking for something that I can do without involving Excel, if possible. Any ideas?
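
    Not SSMS-native, but one scriptable alternative is to pull the result set yourself and mail it as an HTML table, which Outlook renders in a reader-friendly way. A hedged Python sketch - the DSN, query, addresses, and SMTP host are all placeholders:

        import smtplib
        from email.mime.text import MIMEText

        import pyodbc

        conn = pyodbc.connect("DSN=reports")                      # placeholder DSN
        cur = conn.execute("SELECT TOP 50 name, qty FROM stock")  # placeholder query
        cols = [d[0] for d in cur.description]
        rows = cur.fetchall()

        header = "".join("<th>%s</th>" % c for c in cols)
        body = "".join(
            "<tr>%s</tr>" % "".join("<td>%s</td>" % v for v in row) for row in rows)
        html = "<table border='1'><tr>%s</tr>%s</table>" % (header, body)

        msg = MIMEText(html, "html")
        msg["Subject"] = "Ad-hoc query results"
        msg["From"] = "me@example.com"
        msg["To"] = "team@example.com"
        smtplib.SMTP("mailhost.example.com").send_message(msg)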

    Read the article

  • Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

    - by lindelof
    I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues. When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We did this in a rather ad-hoc fashion, but after reading the most recent columns by Herb Sutter in Dr. Dobb's (in particular this one) I would like to size our threads in a more rigorous manner. Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
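
    No JBoss-specific tool comes to mind, but the measurement itself is simple to script: enqueue a fixed batch, drain it with N consumer threads, and time the drain for each N. A toy Python sketch of the method follows - an in-process queue stands in for the JMS queue, so the absolute numbers are meaningless and only the shape of the throughput curve matters; a real test would drive the JBoss Messaging queue with actual JMS producers and consumers:

        import queue
        import threading
        import time

        def throughput(n_consumers, n_messages=100000):
            q = queue.Queue()
            for i in range(n_messages):   # "produce" the whole batch up front
                q.put(i)

            def drain():
                while True:
                    try:
                        q.get_nowait()    # stand-in for a JMS receive()
                    except queue.Empty:
                        return

            start = time.perf_counter()
            threads = [threading.Thread(target=drain) for _ in range(n_consumers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return n_messages / (time.perf_counter() - start)

        for n in (1, 2, 4, 8, 16):
            print("%2d consumers: %d msg/s" % (n, throughput(n)))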

    Read the article

  • How do I tell NHibernate to load a component as not null even when all its properties are null?

    - by SharePoint Newbie
    Hi, I have a Date class which wraps a DateTime? (this aids in mocking DateTime.Now in our domain, etc.). The Date class only has one protected member: DateTime? date.

        public class Date { protected DateTime? date; }

        // mapping in hbm
        <component name="CompletedOn">
          <property column="StartedOn" name="date" access="field" not-null="false" />
        </component>

    From the NHibernate docs: "Like all value types, components do not support shared references. The null value semantics of a component are ad hoc. When reloading the containing object, NHibernate will assume that if all component columns are null, then the entire component is null. This should be okay for most purposes." Can I override this behaviour? I want my Date class to be instantiated even if date is null. Thanks,

    Read the article

  • How to cast XML values for aggregate functions

    - by renegm
    In SQL Server 2008, I need to execute a query like this:

        DECLARE @x AS xml
        SET @x=N'<r><c>First Text</c></r><r><c>Other Text</c></r>'
        SELECT @x.query('fn:max(r/c)')

    But it returns nothing (apparently because converting xdt:untypedAtomic to numeric fails). How do I "cast" r/c to varchar? Something like:

        SELECT @x.query('fn:max(«CAST(r/c «AS varchar(20))»)')

    Edit: when using nodes(), the MAX function is T-SQL's, not the fn:max function. In this code:

        DECLARE @x xml;
        SET @x = '';
        SELECT @x.query('fn:max((1, 2))');
        SELECT @x.query('fn:max(("First Text", "Other Text"))');

    both queries return the expected results: 2 and "Other Text". So fn:max can evaluate string expressions ad hoc, but the first query doesn't work. How do I force string arguments to fn:max?

    Read the article

  • Firefox - Stashing Requests for Deliberate Resubmission to Django App

    - by Koobz
    I've got an object creation form that's somewhat complicated; it contains a few dynamic formsets, etc. I'm trying to ensure that these dynamic formsets are intact if the form runs into an error and returns you to the given page. In cases like this the refresh button actually works well in re-submitting the request, but I can't rely on it. I'm doing some ad-hoc testing in the browser that I'd like to make a bit more repeatable, and eventually move to a unit test using Django's mock client. Is there an extension, or some convenient method, to stash requests for later re-submission? The goal: I resubmit the request, tweak the code, eyeball the results, rinse and repeat. Three days later I can come back and try it again to make sure it's still working. The closest thing I can think of in this case is simply recording my activity with Selenium IDE and replaying it.

    Read the article

  • Possible to see what actual SQL queries Rails invokes when using console script?

    - by randombits
    Sometimes I like to pop open the console script that comes with Rails to test small excerpts of code. That code normally involves some more involved ActiveRecord queries. Although not an expert in ActiveRecord, I'm proficient with SQL and want to see what it translates to under the hood, for efficiency purposes. This will help me refactor or rethink how I'm writing my app if it looks inefficient. Now, when the query is in the actual application itself, it all shows up in the logs. Ad-hoc ActiveRecord queries in the console do not, though. Is there any way to change that behavior?

    Read the article

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should use at most 0-2 workers in the system.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1 but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes are the same task; the only difference between them is how they're requested. They'll take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using Celery?
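
    One way to get these guarantees is to give each class its own queue and size a dedicated worker pool per queue, letting a worker subscribed to two queues cover the "idle class-2 workers absorb ad-hoc jobs" requirement. A sketch against Celery's current API - the broker URL, queue names, and task body are all illustrative:

        from celery import Celery

        app = Celery("jobs", broker="amqp://localhost")  # illustrative broker URL

        @app.task
        def process(item):
            # One task body for all three classes; only the queue differs.
            ...

        # Enqueue into a class-specific queue at call time:
        #   process.apply_async(args=[item], queue="bulk")      # class 1
        #   process.apply_async(args=[item], queue="deadline")  # class 2
        #   process.apply_async(args=[item], queue="adhoc")     # class 3
        #
        # Then cap each class by how its workers are started:
        #   celery -A jobs worker -Q bulk -c 2            # class 1: at most 2 workers
        #   celery -A jobs worker -Q deadline,adhoc -c 8  # class-2 workers also drain
        #                                                 # ad-hoc jobs when idle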

    Read the article

  • Begin Viewing Query Results Before Query Ends

    - by Frank Developer
    OK, so say I have a table with 500K rows, and I run an ad-hoc query with no supporting index, which requires a full table scan. I would like to immediately view the first rows returned while the full table scan continues. Then I want to scroll through the next results. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe with spawn/exec, declare scroll cursor, open, fetch, etc.?
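
    Whether this is possible depends on the engine and client API, but any interface that exposes a server-side cursor will hand over rows while the scan is still running. As one concrete illustration, a Python/psycopg2 sketch using a named (server-side) cursor - the table and connection string are invented:

        import psycopg2

        conn = psycopg2.connect("dbname=big")  # invented connection string
        cur = conn.cursor(name="scan")         # named => server-side cursor
        cur.itersize = 100                     # rows fetched per round trip
        cur.execute("SELECT * FROM big_table WHERE unindexed_col LIKE '%needle%'")

        found = 0
        for row in cur:   # rows stream in while the full table scan continues
            found += 1
            if found % 100 == 0:
                print("SEARCHING.. FOUND %d ROWS SO FAR" % found)
        print("QUERY COMPLETE: %d ROWS" % found)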

    Read the article

  • Why does Xcode keep downloading old deleted profiles and duplicates of the same profile?

    - by Piepants
    If I refresh the profiles in Xcode it: a) pulls down ones that no longer exist - profiles that have been deleted from the portal and are no longer there; b) pulls down multiple copies of the same profile. If I add a new device and then update the profiles to include that new device, Xcode will pull down the new updated profile but also the same profile with an older date (even though the portal only shows one, the latest). If I delete them in Xcode they re-appear. I'm having problems getting push notifications to work with an ad-hoc distribution and so want to ensure I am building with the latest profiles. This behaviour of Xcode is irritating at least, and possibly the source of my problems at worst.

    Read the article

  • How to merge or copy anonymous session data into user data when user logs in?

    - by benhoyt
    This is a general question, or perhaps a request for pointers to other open source projects to look at: I'm wondering how people merge an anonymous user's session data into the authenticated user's data when a user logs in. For example, someone is browsing around your website saving various items as favourites. He's not logged in, so they're saved to anonymous session data. Then he logs in, and we need to merge all that data into his (possibly existing) user data. Is this done in different ad-hoc ways for different applications? Or are there some best practices or other projects people can direct me to?
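
    In Django, for instance, one tidy place to do the merge is a user_logged_in signal handler, since the session data survives login (the session key is rotated, not its contents). A sketch with an invented Favourite model:

        from django.contrib.auth.signals import user_logged_in
        from django.dispatch import receiver

        from myapp.models import Favourite  # invented model with (user, item_id)

        @receiver(user_logged_in)
        def merge_anonymous_favourites(sender, request, user, **kwargs):
            # Favourites saved before login live in the anonymous session.
            for item_id in request.session.pop("favourites", []):
                # get_or_create keeps the merge idempotent: anything already
                # favourited under the account is left untouched.
                Favourite.objects.get_or_create(user=user, item_id=item_id)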

    Read the article
