Search Results

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to its HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Content-Encoding: gzip
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk arrived, but after gunzipping I see that the response is partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue? Could it be one of its compression modules, or maybe some kind of proxy issue? I can't tell you the Tomcat version or which gzip module they use, but feel free to ask; I'll ask my service provider. Thanks.
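
    One way to narrow this down (a sketch; the URL is a placeholder for the real service) is to fetch the same document with and without compression and compare the results, since curl both decodes the chunking and, with --compressed, gunzips the body:

        # Plain response for reference
        curl -s -o plain.xml "http://example.com/service"

        # Request gzip and let curl decompress; -v shows the response headers
        curl -s -v --compressed -o gzipped.xml "http://example.com/service"

        # The files should be identical; any difference reproduces the problem
        cmp plain.xml gzipped.xml

    If the files differ, capturing where the raw compressed stream is truncated would point at whichever hop (Tomcat or a proxy) is cutting it short.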

  • IE not detecting jquery change method for checkbox

    - by user271619
    The code below works in FF, Safari, and Chrome, but IE is giving me issues: when a checkbox is checked, I cannot get IE to detect it.

        $("#checkbox_ID").change(function(){
            if($('#'+$(this).attr("id")).is(':checked')){
                var value = "1";
            }else{
                var value = "0";
            }
            alert(value);
            return false;
        });

    Put simply, I'm not getting that alert popup as expected. I've even tried it this way:

        $("#checkbox_ID").change(function(){
            if( $('#'+$(this).attr("id")'+:checked').attr('checked',false)){
                var value = "1";
            }else{
                var value = "0";
            }
            alert(value);
            return false;
        });

    Here's the simple checkbox input: Does anyone know if IE requires a different jQuery method, or is my code just off?
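
    For what it's worth, older versions of IE only fire the change event for a checkbox when it loses focus, so a common workaround (a sketch) is to bind click instead, which fires at the moment the box is toggled in every browser:

        $("#checkbox_ID").click(function () {
            // 'this' is the checkbox element itself, so there is no need to re-select it by id
            var value = this.checked ? "1" : "0";
            alert(value);
        });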

  • Paypal adaptive payment API call with C# .NET? Preferably with WebServices

    - by Phil
    Okay, I might be entirely off track now, but here goes: our "webshop" offers two functions, buying a specific product and selling it back to us. The back end decides whether the user can sell or not. I've decided to use PayPal's Adaptive Payments for this one, as it seems the way to go for these kinds of transactions. I've never implemented any kind of shop, so I'm totally green with this one. I only recently learned ASP.NET and have mainly developed games before moving to this kind of development. HTTP is still some level of magic to me, hehe. I might be confused, but I think PayPal offers a web service with their Adaptive Payments API. My humble request: a kind soul willing to share an example of making an Adaptive Payments API call with C# .NET. If they don't offer it as a web service, I'll probably find it as a custom .dll or something. Any tips and examples are highly appreciated! Thanks for reading.
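
    Not a full integration, but a hedged sketch of the general shape of such a call: Adaptive Payments is exposed as plain HTTPS endpoints (NVP or SOAP), so one option that needs no SDK is HttpWebRequest against the sandbox Pay endpoint. All credentials, emails, and URLs below are placeholders, and the exact field list should be checked against PayPal's current documentation:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class PayPalPaySketch
        {
            static void Main()
            {
                // Sandbox NVP endpoint for Adaptive Payments (verify against the docs)
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://svcs.sandbox.paypal.com/AdaptivePayments/Pay");
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";

                // API credentials (placeholders) go in headers, not in the body
                request.Headers.Add("X-PAYPAL-SECURITY-USERID", "api_username");
                request.Headers.Add("X-PAYPAL-SECURITY-PASSWORD", "api_password");
                request.Headers.Add("X-PAYPAL-SECURITY-SIGNATURE", "api_signature");
                request.Headers.Add("X-PAYPAL-APPLICATION-ID", "APP-80W284485P519543T"); // generic sandbox app id
                request.Headers.Add("X-PAYPAL-REQUEST-DATA-FORMAT", "NV");
                request.Headers.Add("X-PAYPAL-RESPONSE-DATA-FORMAT", "NV");

                // Minimal Pay request: one receiver, placeholder values throughout
                string body =
                    "actionType=PAY" +
                    "&currencyCode=USD" +
                    "&receiverList.receiver(0).email=seller@example.com" +
                    "&receiverList.receiver(0).amount=10.00" +
                    "&returnUrl=http://example.com/done" +
                    "&cancelUrl=http://example.com/cancel" +
                    "&requestEnvelope.errorLanguage=en_US";

                byte[] bytes = Encoding.UTF8.GetBytes(body);
                using (Stream s = request.GetRequestStream())
                    s.Write(bytes, 0, bytes.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd()); // contains payKey=... on success
            }
        }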

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then, of course, programs have continued to get larger, and in 2010 the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting-edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
    - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4-byte padding, and 64-bit systems such as we use today would probably have required 8 bytes. 2-byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4-byte alignment, and 64-bit objects require 8-byte alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2-byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8-byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way, of course: nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
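
    For reference, here is the member header and magic number as conventionally declared in /usr/include/ar.h (the comments are mine; note that every numeric field is ASCII text, as described above):

        #define ARMAG   "!<arch>\n"   /* archive magic number, 8 characters */
        #define SARMAG  8
        #define ARFMAG  "`\n"         /* header trailer string */

        struct ar_hdr                 /* archive member header */
        {
            char ar_name[16];         /* member name, '/' terminated */
            char ar_date[12];         /* modification time, ASCII decimal */
            char ar_uid[6];           /* user id, ASCII decimal */
            char ar_gid[6];           /* group id, ASCII decimal */
            char ar_mode[8];          /* file mode, ASCII octal */
            char ar_size[10];         /* member size in bytes, ASCII decimal */
            char ar_fmag[2];          /* header trailer, contains ARFMAG */
        };
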
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member, which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow:

    - The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
      As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.
    - Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.
    - The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward-compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64 bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems, though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64 bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
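
    To make the overflow convention concrete, here is a small illustrative sketch (not the Solaris implementation) of how a reader might decode an SVR4 member name, including the string-table case that the proposed size extension would mimic:

        #include <stdlib.h>

        /*
         * Sketch: decode an SVR4 archive member name. strtab points at the
         * contents of the archive string table (NULL if the archive has none).
         * The special members "/" and the string table itself would decode to
         * an empty name here and need their own handling.
         */
        static const char *
        member_name(const char ar_name[16], const char *strtab, char buf[16])
        {
            if (ar_name[0] == '/' && ar_name[1] >= '0' && ar_name[1] <= '9') {
                /* "/ddd": decimal offset of the real name in the string table */
                return (strtab != NULL) ? strtab + strtol(ar_name + 1, NULL, 10) : NULL;
            }

            /* Regular name: terminated by '/' within the 16-character field */
            size_t i;
            for (i = 0; i < 15 && ar_name[i] != '/'; i++)
                buf[i] = ar_name[i];
            buf[i] = '\0';
            return buf;
        }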

  • How can I enjoy or avoid designing every web application I make?

    - by schmrz
    I know this sounds silly, but I'm having huge (ok, not that huge, but still...) problems when I get an idea for a web project, small or big. The instant turn-off is when I remember that I have to code the HTML/CSS by hand again and again. I like programming a lot more than designing web sites, and I simply don't enjoy designing them as much as I enjoy programming them. That said, I also prefer simple and minimalistic designs. What is your approach to web design, and how do you make it enjoyable (at least a little bit)?

  • Best Open Source Java CMS

    - by LuRsT
    I'm trying to find a good Java CMS. I've stumbled upon some that are quite good, like Apache Lenya, dotCMS, Info Glue, Open Edit, MMBase, Contelligent, and Hippo CMS. Which one do you guys recommend, or even one that I'm missing? I have some more that I am studying at the moment. The requirements are that I can build modules for it with ease, that it is open source and free, and that it has LDAP support. The problem is that I'm not that into Java on the web, which is why I'm having trouble finding a good one. A Java CMS like DotNetNuke would be the best. Edit: Jahia is off the list because it has no support for LDAP (community version). Thanks!

  • AD: Stopping a Script and Writing a Value to a User's AD Account for a PPT Presentation

    - by Steven Maxon
    'This will launch the PPT in a GPO:

        Dim ppt
        Set ppt = CreateObject("PowerPoint.Application")
        ppt.Visible = True
        ppt.Presentations.Open "C:\Scripts\Test.pptx"

    'This is the batch file at the end of the PPT that records the date, time, computer name, and user name:

        echo "Logon Date:%date%,Logon Time:%time%,Computer Name:%computername%,User Name:%username%" >> \\servertest\g$\Tracking\LOGON.TXT

    'This is what I need but can't find: I need the script to check a value in the Active Directory user's account in the "Web page:" attribute that would shut off the script if the user has already completed reading the presentation. It could be as simple as writing XXXX. I need the value XXXX written to the Active Directory user's account in the "Web page:" attribute when they finish reading the presentation, after they click on the .bat file, so the script will not run again when they log in.
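
    For what it's worth, the "Web page" field in Active Directory Users and Computers maps to the wWWHomePage attribute, so a hedged sketch of the check might look like this (the XXXX marker is the one from the question; this assumes users are allowed to write their own attribute, which may require delegation):

        'Sketch only: check/set the current user's "Web page" (wWWHomePage) attribute
        Dim objSysInfo, objUser, marker
        Set objSysInfo = CreateObject("ADSystemInfo")
        Set objUser = GetObject("LDAP://" & objSysInfo.UserName)

        On Error Resume Next              'Get() throws if the attribute is empty
        marker = objUser.Get("wWWHomePage")
        On Error GoTo 0

        If marker = "XXXX" Then WScript.Quit   'already completed; do not run again

        '... launch the PowerPoint presentation here ...

        'After completion, record the marker so the script will not run again
        objUser.Put "wWWHomePage", "XXXX"
        objUser.SetInfo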

  • Best practices for large solutions in Visual Studio (2008)

    - by Eyvind
    We have a solution with around 100+ projects, most of them C#. Naturally, it takes a long time to both open and build, so I am looking for best practices for such beasts. The kinds of questions I am hoping to get answers to are:

    - How do you best handle references between projects? Should "copy local" be on or off?
    - Should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?
    - Are solution folders a good way of organizing stuff?

    I know that splitting the solution up into multiple smaller solutions is an option, but that comes with its own set of refactoring and building headaches, so perhaps we can save that for a separate thread :-)

  • Problem serving video to iPad from Mongrel 1.1.5, RoR 2.0.2 on Mac OS 10.6.3 Server

    - by reggieunderground
    I am working off the Final Cut Server Integration sample provided by Apple, which is a Rails app using 2.0.2 and runs on Mongrel 1.1.5. I can have a basic non-Ruby directory on my boot volume (Mac OS 10.6.3 Server) and serve up video files just fine to the iPad. However, when I drag the same video file into the 'public' area of the app running on -p 3000, it will not play on the iPad, but will in the desktop version of Safari 4.0.5. Images work fine, so I know it's not a permissions issue. I'm getting the crossed-out play icon on the iPad, so it is not an issue of waiting for a large file to download before playing. I suspect it is a Mongrel/Snow Leopard issue, but why would Safari on the desktop work fine and not Safari on the iPad? FYI, the Integration Sample is at the bottom of this page: http://www.apple.com/finalcutserver/resources/ Any help is very much appreciated, Reggie

  • Issue with child of custom Decorator class in WPF

    - by galacticgrug
    I need a custom border that renders a little differently than a normal border, so I made a class that inherits from Decorator as follows:

        class BetterBorder : Decorator
        {
            protected override Size ArrangeOverride(Size arrangeSize)
            {
                return arrangeSize;
            }

            protected override void OnRender(DrawingContext dc)
            {
                //these values are calculated elsewhere
                dc.DrawGeometry(backgroundBrush, borderPen, pathGeometry);
            }
        }
        //Properties and helper methods below this

    All of this works fine until I try to add a child to the control: the child can be added but is not visible, and seems to be moved off BetterBorder's visible client area. If I inherit from Border, everything works fine. What am I missing?
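
    For context, the stock Decorator measures and arranges its Child inside its own MeasureOverride/ArrangeOverride; the override above returns arrangeSize without ever arranging the child, so the child never gets a layout slot. A sketch of the usual fix (border insets omitted for brevity):

        // Sketch: give the Child a chance to be measured and arranged.
        protected override Size MeasureOverride(Size constraint)
        {
            if (Child != null)
            {
                Child.Measure(constraint);
                return Child.DesiredSize;
            }
            return new Size();
        }

        protected override Size ArrangeOverride(Size arrangeSize)
        {
            // Arrange the child over the full area (inset by the border thickness if needed)
            if (Child != null)
                Child.Arrange(new Rect(new Point(0, 0), arrangeSize));
            return arrangeSize;
        }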

  • OpenLayers Projections.

    - by Jenny
    I can successfully do:

        point.transform(new OpenLayers.Projection("EPSG:900913"), new OpenLayers.Projection("EPSG:4326"));

    on a point that is in the Google format (in meters), but when I want to do the reverse:

        point.transform(new OpenLayers.Projection("EPSG:4326"), new OpenLayers.Projection("EPSG:900913"));

    on a point that is in EPSG:4326 (regular lat/lon format), I am having some issues. Any negative value seems to become NaN (not a number) when I do the transformation. Is there something about the transformation in reverse that I don't understand? Edit: Even worse, when I have no negative values, the coordinates seem off. I am getting the coordinates by drawing a square on the screen, then saving those coordinates to a database and loading them later. I can draw a square near the tip of Africa (positive coordinates), and when it loads it's near the top of Africa, in the Atlantic Ocean. I'm definitely doing something wrong...
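
    One thing worth ruling out (a guess, since the save/load code isn't shown): transform() modifies the geometry in place, so transforming a point twice, or saving a point the map has already reprojected, silently corrupts the coordinates; also note that x is longitude and y is latitude. A defensive sketch:

        // Sketch: clone before transforming so the original stays untouched,
        // and transform exactly once in each direction.
        var wgs84 = new OpenLayers.Projection("EPSG:4326");
        var mercator = new OpenLayers.Projection("EPSG:900913");

        // lon/lat -> spherical mercator (note: x = lon, y = lat)
        var point = new OpenLayers.Geometry.Point(18.4, -33.9);  // near the tip of Africa
        var projected = point.clone().transform(wgs84, mercator);

        // spherical mercator -> lon/lat
        var back = projected.clone().transform(mercator, wgs84);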

  • Most used .NET namespace

    - by Michael Prewecki
    What is your most commonly used namespace in .NET? I know it will vary greatly based upon the types of projects you develop, but the Stack Overflow audience should provide a fairly decent sample set for the types of .NET projects being developed. I'm simply interested in the name of the namespace (one namespace per answer, and no one person should have more than one answer; if someone else has the same answer as you, then just upvote their answer). Try to be as specific as possible (so answering System isn't helpful). I'm after this information to help new developers focus their attention on the most common .NET namespaces... there are, after all, thousands of them! To start off, mine is almost certainly System.Collections.Generic; I use lists of things everywhere.

  • jQuery: check all checkboxes

    - by pcampbell
    Consider this scenario: an <asp:CheckBoxList> in a master page. The goal is to have all checkboxes in this list checked on page load; there are many checkbox lists on the page. The markup:

        <asp:CheckBoxList runat="server" ID="chkSubscriptionType" DataSourceID="myDS" CssClass="boxes" DataTextField="Name" DataValueField="Name" />

    renders to:

        <input id="ctl00_cphContent_chkSubscriptionType_0" type="checkbox" name="ctl00$cphContent$chkSubscriptionType$0" />

    Question: how can you use jQuery to check all boxes in this asp:CheckBoxList on document.ready? I see samples everywhere, but the naming convention used by the master page throws off the samples from other places.
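
    One hedged approach: rather than fighting the mangled ids, scope the selector to the CssClass that is already on the list (the rendered container carries class "boxes"), and check everything inside it on document.ready:

        $(document).ready(function () {
            // Check every checkbox rendered inside any CheckBoxList tagged with
            // CssClass="boxes"; the server-generated ids never need to be spelled out.
            $(".boxes input:checkbox").attr("checked", true);
        });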

  • How to free port 80 for xampp to work

    - by Alfie
    Please help. I used to be running XAMPP and it was working perfectly. Then I wanted to try something out, so I ran IIS instead of XAMPP. Now I want to go back to using XAMPP, but whenever I try to run the Apache server it says: Busy... Apache started [port 80]. If I go to http://localhost/ it just says that it can't establish a connection to the server. I have turned off IIS, so I don't see why it shouldn't work. Any suggestions?
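
    A hedged way to find out what is still holding port 80: IIS's kernel-mode listener (http.sys) can keep the port even when the web sites appear stopped. From a command prompt, something like:

        REM Find the PID bound to port 80
        netstat -aon | findstr :80

        REM PID 4 (SYSTEM) usually means http.sys; stop the HTTP service and its dependents
        net stop http /y

    After that, starting Apache from the XAMPP control panel should be able to bind port 80.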

  • Configure Active Relying Party STS to Trust Multiple Identity Provider STSes

    - by CodeChef
    I am struggling with the configuration for the scenario below:

    - I have a custom WCF/WIF STS (RP-STS) that provides security tokens to my WCF services
    - RP-STS is an "Active" STS
    - RP-STS acts as a claims-transformation STS
    - RP-STS trusts tokens from many customer-specific identity provider STSes (IdP-STS)
    - When a WCF client connects to a service, it should authenticate with its local IdP-STS

    The reading that I've done describes this as Home Realm Discovery. HRD is usually described within the context of web applications and passive STSes. My question is: for my situation, does the logic for choosing an IdP-STS endpoint belong in the RP-STS or in the WCF client application? I thought it belonged in the RP-STS, but I cannot figure out the configuration to make this happen. The RP-STS has a single endpoint, but I cannot figure out how to add more than one trusted issuer per endpoint. Any guidance on this would be very much appreciated (I'm out of useful keywords to Google). Also, if I'm way off, please offer alternative approaches. Thanks!
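
    On the narrower question of trusting more than one issuer: in WIF's configuration model it is the issuer name registry, not the endpoint, that lists trusted issuers, and ConfigurationBasedIssuerNameRegistry accepts any number of them. A sketch, with placeholder thumbprints and names, and the assembly version from WIF 1.0 (verify against your installation):

        <microsoft.identityModel>
          <service>
            <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
              <trustedIssuers>
                <!-- one entry per customer IdP-STS signing certificate -->
                <add thumbprint="1111111111111111111111111111111111111111" name="CustomerA-IdP" />
                <add thumbprint="2222222222222222222222222222222222222222" name="CustomerB-IdP" />
              </trustedIssuers>
            </issuerNameRegistry>
          </service>
        </microsoft.identityModel>

    This lets the RP-STS accept tokens signed by any of the listed IdPs; choosing which IdP to authenticate against in the first place, however, still has to happen on the client side, since the client must obtain its token before it ever reaches the RP-STS.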

  • C# run Javascript from button click

    - by ABB
    I have a button on my .aspx interstitial page. When I click it, the OnClick event fires and does a bunch of validations in the code. I have a JavaScript function that I need to call AFTER these validations are performed; this JavaScript function closes the interstitial page. How can I call the JavaScript function from my C# code? I've tried adding a ScriptManager and a client script, but neither works. What else besides these two options do I have? I'd be willing to use a hack if it works. The JavaScript I'm using: javascript:parent.interstitialBox.closeit(); return false
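
    For reference, the usual way to queue script that runs after server-side validation completes is RegisterStartupScript; a sketch (the handler name is hypothetical, and the ScriptManager overload is required if the button lives inside an UpdatePanel):

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // ... run the validations first ...

            // Queue script that executes when the response reaches the browser.
            // Plain postback:
            ClientScript.RegisterStartupScript(GetType(), "closeInterstitial",
                "parent.interstitialBox.closeit();", true);

            // Inside an UpdatePanel, use the ScriptManager overload instead:
            // ScriptManager.RegisterStartupScript(this, GetType(), "closeInterstitial",
            //     "parent.interstitialBox.closeit();", true);
        }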

  • Is there an extensible SQL like query language that is safe for exposing via a public API?

    - by Lokkju
    I want to expose some spatial (and a few non-spatial) datasets via a public API. The backend store will be either PostgreSQL/PostGIS, sqlite/spatialite, or CouchDB/GeoCouch. My goal is to find some, preferably standard, way to allow people to make complex spatial queries against the data. I would like it to be a simple GET-based request. The idea is to allow safe SQL-type queries without allowing unsafe ones. I would rather modify something that is off the shelf than do the entire thing myself. I specifically want to support requesting specific fields from a table, joining results, and spatial functions that are already implemented by the underlying datastore. Ideas, anyone?

  • jQuery - How to remove a DOM element BEFORE complete page load

    - by webfac
    Now this may seem like a silly question, but I need to know how to remove a DOM element BEFORE it is displayed to the user. In short, I am using a div with a background image alerting the user to enable JavaScript before proceeding. If the user has JavaScript enabled, I use jQuery to remove the DOM element, in this case $(".check-js"), which is the div housing the image. Using the conventional methods to unload DOM objects, as follows, does not work because it waits for the page load to complete and then removes the element, causing the image to flicker on and off each time the page loads:

        $(function(){ $(".check-js").css( {display:"none"} ) })

    I simply want to remove the div if the user has JS enabled; he must never see this div. Any suggestions and I will be grateful, thanks.
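
    One common pattern (a sketch of the usual no-js class trick rather than a jQuery fix): have CSS hide the warning whenever a js class is present on the root element, and add that class with a tiny inline script in the <head>, which runs before the body is painted, so nothing ever flickers:

        <!-- In <head>: executes before the body renders -->
        <style>.js .check-js { display: none; }</style>
        <script>
          // If this runs at all, JavaScript is enabled; tag the root element.
          document.documentElement.className += " js";
        </script>

        <!-- In <body>: visible only when the script above never ran -->
        <div class="check-js"><!-- "enable JavaScript" image --></div>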

  • What affects drawing charts/diagrams from $_SESSION data under PHP 5, which worked under PHP 4?

    - by ste_php
    Hello, I have a script that generates 3 diagrams from $_SESSION variables. It works fine under PHP 4 with register_globals = off, but when I run the same script under PHP 5, I get no diagrams. The diagrams are drawn with the GD library, and drawing works if I put the data into an array (manually filled) within the script file. But I need a way to get it working on PHP 5 without many changes. Are there any session settings or PHP settings which might interfere with my script? I have already checked a lot of the PHP settings (changing php.ini over and over again), but found nothing that brings the diagrams back. Hopefully someone can kick me in the right direction. Any ideas? Thanks a lot.

  • What is the performance impact of enabling WebSphere PMI?

    - by Andrew Whitehouse
    I am currently looking at some JProfiler traces from our WebSphere-based application, and am noticing that a significant amount of CPU time is being spent in the class com.ibm.io.async.AsyncLibrary.getCompletionData2. I am guessing, but I am wondering whether this is PMI-related (we do have PMI enabled). My knowledge of PMI is limited, as it is managed by another team. Is it expected that PMI can have this sort of impact? (If so) Is the only option to turn it off completely, or are there some types of data capture that have a particularly high overhead?

  • What does a well formed XML or Schema look like for InfoPath form creation?

    - by Keith Sirmons
    Howdy, are there any resources out there that define what a well-formed XML document or schema should look like for InfoPath? When designing a new form, there is an option to base the new form on an existing XML document or XML schema as the data source. I am looking for any guidelines or rules that will help me make sure the structure of the XML file I use to create the form will work the best it could. I am creating the XML structure for another project, but we want to make sure the XML we create would be InfoPath-friendly for possible future applications. Thank you, Keith

  • Call a void* as a function without declaring a function pointer

    - by ToxIk
    I've searched but couldn't find any results (my terminology may be off), so forgive me if this has been asked before. I was wondering if there is an easy way to call a void* as a function in C without first declaring a function pointer and then assigning the function pointer the address; i.e., assuming the function to be called is of type void(void):

        void *ptr;
        ptr = <some address>;
        ((void*())ptr)(); /* call ptr as function here */

    With the above code, I get error C2066: cast to function type is illegal in VC2008. If this is possible, how would the syntax differ for functions with return types and multiple parameters?
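
    For comparison, the cast that compilers accept is to a pointer-to-function type rather than to a function type; a sketch covering both the void(void) case and one with a return value and parameters (it is up to the caller to know that the pointer really refers to code with that signature):

        #include <stdio.h>

        static void greet(void) { puts("hello"); }
        static int add(int a, int b) { return a + b; }

        int main(void)
        {
            /* Converting between void* and function pointers is not strictly
             * portable ISO C, but is widely supported (and required by POSIX). */
            void *p1 = (void *)greet;
            void *p2 = (void *)add;

            /* Cast to pointer-to-function, then call, in one expression */
            ((void (*)(void))p1)();

            /* Same idea with a return type and parameters */
            int sum = ((int (*)(int, int))p2)(2, 3);
            printf("%d\n", sum);
            return 0;
        }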

  • Interacting with RESTful APIs via JavaScript?

    - by Alex
    Hi there. To start off, I know C++, C#, Python, some Ruby, and basic JavaScript. Anyway, my question revolves around how to interact with RESTful APIs via JavaScript. I haven't been able to find any good examples on various websites, so I've come here. My basic question is: how do I interact with RESTful APIs via JS? And where can I find out how to implement OAuth in JS? I know how to get my keys and such, just not how to actually code them in. Below is an example of a Twitter API status update run from my Mac terminal with curl:

        curl -u username:password -d "my tweet" http://api.twitter.com/1/statuses/update.json

    How can I implement this in JavaScript (preferably with OAuth authentication)? This would at least start me going in the right direction. Thanks so much!!
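
    As a starting point, the curl call above is just an HTTP POST, which in the browser maps onto XMLHttpRequest. A sketch (the /proxy/ path is a placeholder: the browser's same-origin policy blocks calling twitter.com directly from your own domain, which is why such requests are commonly relayed through your own server, where the OAuth signing can also live):

        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/proxy/statuses/update.json", true);  // relayed via your own server
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var tweet = JSON.parse(xhr.responseText);  // the API returns JSON
                console.log(tweet);
            }
        };
        xhr.send("status=" + encodeURIComponent("my tweet"));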

  • How to increase detail band height dynamically

    - by Chandu
    Hi all, I am using iReport. My requirement is to increase the detail band height dynamically when the text field has more data; are there any settings for this? I am using one text field in the detail band, and when it has more information (words), only some of the information is displayed, i.e. the words are being cut off. The words display only up to the detail band height. I would like the band height to grow dynamically when the text field has more data. Please advise me in this regard. Regards, Chandu
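
    For what it's worth, in iReport this is normally the text field's "Stretch with overflow" option together with a band that is allowed to stretch; in the underlying JRXML that corresponds to something like the following sketch (the field name and sizes are placeholders):

        <detail>
          <band height="30" splitType="Stretch">
            <textField isStretchWithOverflow="true">
              <reportElement x="0" y="0" width="500" height="20"/>
              <textFieldExpression><![CDATA[$F{description}]]></textFieldExpression>
            </textField>
          </band>
        </detail>

    With isStretchWithOverflow set, the field grows to fit its content and the band stretches with it.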

  • Keep Windows Mobile 6 phone alive in C#

    - by QAH
    Hello! I am making an application for a Windows Mobile 6.1 Pocket PC (touchscreen). I know that when a Pocket PC's screen turns off, it goes into standby mode and applications are pretty much halted in the background. My application can't do that; it needs to keep going. So my question is: how can I keep the phone awake (backlight turned on) until my application is done? An example of this would be video streaming applications such as YouTube, which keep the phone on while the video is playing. Thanks
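
    One hedged approach: P/Invoke SystemIdleTimerReset from coredll.dll and call it on a timer more frequently than the device's idle timeout, which is how long-running apps commonly keep the device from suspending. A sketch (the 30-second interval is an assumption; the backlight timeout is a separate setting that may also need attention):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class KeepAlive
        {
            // Resets the system idle timer so the device does not suspend
            [DllImport("coredll.dll")]
            static extern void SystemIdleTimerReset();

            static Timer timer;

            public static void Start()
            {
                timer = new Timer();
                timer.Interval = 30000;  // well under typical idle timeouts
                timer.Tick += delegate { SystemIdleTimerReset(); };
                timer.Enabled = true;
            }

            public static void Stop()
            {
                if (timer != null) timer.Enabled = false;
            }
        }

    Call KeepAlive.Start() when the long-running work begins and KeepAlive.Stop() when it finishes, so the device can power-manage normally again.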
