Search Results

  • Android NDK import-module / code reuse

    - by Graeme
    Morning! I've created a small NDK project which allows dynamic serialisation of objects between Java and C++ through JNI. The logic works like this: Bean - JavaCInterface.Java - JavaCInterface.cpp - JavaCInterface.java - Bean. The problem is I want to use this functionality in other projects. I separated out the test code from the project and created a "Tester" project. The tester project sends a Java object through to C++ which then echoes it back to the Java layer. I thought linking would be pretty simple - ("Simple" in terms of NDK/JNI is usually a day of frustration). I added the JNIBridge project as a source project and included the following lines in the Android.mk files: NDK_MODULE_PATH=.../JNIBridge/jni/" JNIBridge/jni/JavaCInterface/Android.mk: ... include $(BUILD_STATIC_LIBRARY) JNITester/jni/Android.mk: ... include $(BUILD_SHARED_LIBRARY) $(call import-module, JavaCInterface) This all works fine. The C++ files which rely on headers from the JavaCInterface module work fine. Also the Java classes can happily use interfaces from the JNIBridge project. All the linking is happy. Unfortunately JavaCInterface.java, which contains the native method calls, cannot see the JNI methods located in the static library. (Logically they are in the same project, but both are imported into the project where you wish to use them through the above mechanism.) My current options are as follows; I'm hoping someone can suggest something that will preserve the modular nature of what I'm trying to achieve. The first is to include the JavaCInterface cpp files in the calling project like so: LOCAL_SRC_FILES := FunctionTable.cpp $(PATH_TO_SHARED_PROJECT)/JavaCInterface.cpp But I'd rather not do this, as it would mean updating each depending project whenever I change the JavaCInterface architecture. The second is to create a new set of JNI method signatures in each local project which then link to the imported modules. Again, this binds the implementations too tightly.
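
    A plausible cause (not confirmed in the question) is that the linker drops the unreferenced Java_* entry points when it folds the static library into the shared library, since nothing in the shared library references them directly. ndk-build's LOCAL_WHOLE_STATIC_LIBRARIES forces every object file of the imported module into the final .so, so the JNI symbols survive. A minimal sketch of the calling project's Android.mk; the module name javacinterface is an assumption and must match the LOCAL_MODULE declared by the imported JavaCInterface Android.mk:

      # JNITester/jni/Android.mk -- a sketch; "javacinterface" is assumed to be
      # the LOCAL_MODULE name declared by the imported JavaCInterface module.
      LOCAL_PATH := $(call my-dir)

      include $(CLEAR_VARS)
      LOCAL_MODULE    := jnitester
      LOCAL_SRC_FILES := FunctionTable.cpp

      # Pull in *all* objects from the static library so its Java_* JNI
      # entry points are kept in the final shared library.
      LOCAL_WHOLE_STATIC_LIBRARIES := javacinterface

      include $(BUILD_SHARED_LIBRARY)

      $(call import-module, JavaCInterface)

    An alternative that avoids relying on link-time symbol retention is to register the natives explicitly from a JNI_OnLoad in the shared library via RegisterNatives, so the static library's functions are referenced by name.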

    Read the article

  • Django IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order. class Order(...): ... class Shipment() order = m.ForeignKey('Order') ... Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4, PostgreSQL 8.4 and I use transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get: ... File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit return self.connection.commit() IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment" DETAIL: Key (id)=(45) is still referenced from table "main_shipment". I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted; after that Django executes the delete on the order row: {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'}, {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'} The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get a foreign key constraint violation. I also tried to register a pre_delete signal and manually delete shipment objects before the delete on order is called, but it resulted in the same error. I can change the ON DELETE behaviour for this key in Postgres, but that would be just a hack; I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.
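
    Since the constraint is INITIALLY DEFERRED it is only checked at commit, so the error usually means some row still references the order at that point, e.g. a shipment the delete collector did not pick up. A small diagnostic sketch; the model names follow the question and the myapp import path is a placeholder:

      # A diagnostic sketch; "myapp" is a placeholder for the real app path.
      from django.db import connection
      from myapp.models import Shipment

      def referencing_shipments(order_pk):
          """List every shipment row that still points at the order."""
          cursor = connection.cursor()
          cursor.execute('SELECT id FROM main_shipment WHERE order_id = %s', [order_pk])
          return [row[0] for row in cursor.fetchall()]

      def cancel_order(order):
          # Delete the children explicitly inside the same transaction, then the
          # parent; with a deferred constraint both deletes must have happened
          # by the time the transaction commits.
          Shipment.objects.filter(order=order).delete()
          order.delete()

    Running referencing_shipments(order.pk) just before the delete shows whether the id from the error message (45) really has no remaining references; in later Django versions the on_delete argument on ForeignKey makes the intended cascade explicit.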

    Read the article

  • VS2010 error: Unable to start debugging on the web server

    - by GarDavis
    I get error message "Unable to start debugging on the web server" in Visual Studio 2010. I clicked the Help button and followed the related suggestions without success. This happens with a newly created local ASP.Net project when modified to use IIS instead of Cassini (which works for debugging). It prompts to set debug="true" in the web.config and then immediately pops up the error. Nothing shows up in the Event Viewer. I am able to attach to w3wp to debug. It works but is not as convenient as F5. I also have a similar problem with VS2008 on the same PC. Debugging used to work for both. I have re-registered Framework 4 (aspnet_regiis -i). I ran the VS2010 repair (this is the RTM version). I am running on a Windows Server 2008 R2 x64 box. I do have Resharper V5 installed. There must be some configuration setting or registry value that survives the repair causing the problem. I'd appreciate any ideas. Thanks, Gary Davis ([email protected])

    Read the article

  • If the dependency is a unique snapshot version and install is called, what does maven select?

    - by chrsk
    Imagine two projects. The first is the framework-core project, which is at version 1.1.0 and has several snapshot builds. The other is the example-business project, which has the following dependency on framework-core, pinned to build iteration number 9. <dependency> <groupId>org.example</groupId> <artifactId>framework-core</artifactId> <version>1.1.0-20100518.134928-9</version> </dependency> What happens if mvn install is called on framework-core? I found out that the artifact is copied to the local repository and is named *.1.1.0-SNAPSHOT.jar (as expected). This led me to the assumption that this version is only used if the 1.1.0-SNAPSHOT version itself is declared as the dependency, and not the precise build. So, to test something locally without deploying it to the Maven repository: call mvn install, change the dependency to 1.1.0-SNAPSHOT, and the artifact just installed is used? Or is it possible to overwrite the specific build (using only the install lifecycle phase)?
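
    That assumption matches how snapshot resolution normally behaves: mvn install stores the artifact in the local repository under the plain -SNAPSHOT version, while a timestamped version such as 1.1.0-20100518.134928-9 keeps resolving to that exact deployed build. A sketch of the dependency that picks up the locally installed build:

      <!-- A sketch: depending on the plain SNAPSHOT lets the artifact just
           installed with `mvn install` win over the remote timestamped builds. -->
      <dependency>
        <groupId>org.example</groupId>
        <artifactId>framework-core</artifactId>
        <version>1.1.0-SNAPSHOT</version>
      </dependency>

    Switching back to the timestamped version re-pins the exact build once local testing is done.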

    Read the article

  • Using Kerberos authentication for SQL Server 2008

    - by vivek m
    I am trying to configure my SQL Server to use Kerberos authentication. My setup is like this: I have 2 virtual PCs in a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS server and DHCP server, has Active Directory installed, and the SQL Server default instance is also running on this VPC. The second VPC is the domain member and acts as the SQL Server client machine. I configured the SPN on the SQL Server service account to get Kerberos working. On the client VPC it seems like it is using Kerberos authentication (as desired): C:\Documents and Settings\administrator.SHAREPOINTSVC>sqlcmd -S vm-winsrvr2003 1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid 2> go auth_scheme ---------------------------------------- KERBEROS (1 rows affected) 1> but on the server computer (where the SQL Server instance is actually running) it looks like it is still using NTLM authentication. This is not a remote instance; the SQL Server is local to this machine. C:\Documents and Settings\Administrator>sqlcmd 1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid 2> go auth_scheme ---------------------------------------- NTLM (1 rows affected) 1> What can I do so that it uses Kerberos on the server computer as well? (Or is this something that I should not expect?)
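
    One thing worth ruling out (an assumption, not something stated in the question): a plain local sqlcmd connection typically goes over shared memory or the local machine name rather than the name the SPN was registered for, and in that case it commonly negotiates NTLM even though the SPN is fine. A diagnostic sketch; MYDOMAIN\sqlsvc and the FQDN are placeholders:

      REM List the SPNs registered for the SQL Server service account:
      setspn -L MYDOMAIN\sqlsvc

      REM From the server itself, force the TCP provider and the host name
      REM that the MSSQLSvc SPN was registered against:
      sqlcmd -S tcp:vm-winsrvr2003.mydomain.local -E

    If auth_scheme still comes back as NTLM only for local connections, that may simply be expected behaviour for loopback logons.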

    Read the article

  • JavaScript timezone solution needed (taking into account the actual difference in UTC timestamps)

    - by user198729
    I have Unix timestamps from time zone X, which is not known; the current timestamp (now()) in TZ X is known: 1275143019. How do I approach a JavaScript function so that it can generate the datetime in the user's current TZ in the format 2010-05-29 15:32:35? UPDATE I'm not a Unix timestamp expert. If a Unix timestamp is always the same in different TZs, then I have to change the question a little, so that the current datetime in TZ X is known (like 2010-05-29 22:32:28) and the other datetimes are also in this format; how do I convert them to the user's TZ based on the difference from now()? UPDATE Something strange from MySQL: On the server: mysql> select now(); +---------------------+ | now() | +---------------------+ | 2010-05-29 18:34:30 | +---------------------+ 1 row in set (0.00 sec) mysql> select UNIX_TIMESTAMP(); +------------------+ | UNIX_TIMESTAMP() | +------------------+ | 1275143674 | +------------------+ 1 row in set (0.00 sec) On local: mysql> select now(); +---------------------+ | now() | +---------------------+ | 2010-05-29 22:41:30 | +---------------------+ 1 row in set (0.00 sec) mysql> select UNIX_TIMESTAMP(); +------------------+ | UNIX_TIMESTAMP() | +------------------+ | 1275144091 | +------------------+ 1 row in set (0.00 sec) Why are the difference between now() (2010-05-29 22:41:30 - 2010-05-29 18:34:30 = 4 hours 7 minutes) and the difference between UNIX_TIMESTAMP() (1275144091 - 1275143674 = 417 seconds) not the same?
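
    (On the MySQL aside: UNIX_TIMESTAMP() is timezone-independent while now() is not, so the numbers are consistent with a 4-hour zone offset between the two machines plus roughly 7 minutes elapsing between running the two statements.) For the main question, if the current datetime string in TZ X and the user's local clock are both known, the offset between them can be applied to any other datetime from TZ X. A minimal sketch, assuming every input string is in the "YYYY-MM-DD hh:mm:ss" format and ignoring DST transitions between the two instants:

      // A minimal sketch, assuming inputs are always "YYYY-MM-DD hh:mm:ss".
      function parseAsLocal(s) {
        var p = s.split(/[- :]/);                       // [Y, M, D, h, m, s]
        return new Date(p[0], p[1] - 1, p[2], p[3], p[4], p[5]);
      }

      function pad(n) { return (n < 10 ? "0" : "") + n; }

      // remoteNow: the current datetime string in TZ X; target: any other
      // datetime string from TZ X.  Returns target expressed in the user's TZ.
      function toUserTimezone(remoteNow, target) {
        var offsetMs = new Date() - parseAsLocal(remoteNow); // user now minus remote now
        var d = new Date(parseAsLocal(target).getTime() + offsetMs);
        return d.getFullYear() + "-" + pad(d.getMonth() + 1) + "-" + pad(d.getDate()) +
               " " + pad(d.getHours()) + ":" + pad(d.getMinutes()) + ":" + pad(d.getSeconds());
      }

    For example, toUserTimezone("2010-05-29 22:32:28", "2010-05-29 22:32:28") should return the user's current local time, to within the request latency.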

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates: (1) The root Entity has global identity and is ultimately responsible for checking invariants. (2) Root Entities have global identity; Entities inside the boundary have local identity, unique only within the Aggregate. (3) Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block). (4) Only Aggregate Roots can be obtained directly with database queries; everything else must be done through traversal. (5) Objects within the Aggregate can hold references to other Aggregate roots. (6) A delete operation must remove everything within the Aggregate boundary all at once. (7) When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied. This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3 for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)
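
    One common C# approach to rule 3 (a sketch, not from the question; the Order/OrderLine names are invented) is never to hand out a mutable view of the internals at all: the root exposes read-only collections and the internal entities only allow mutation from inside the aggregate's assembly, so even a caller that keeps the reference cannot break invariants.

      // A sketch; Order/OrderLine are invented example names.
      using System.Collections.Generic;

      public class OrderLine
      {
          internal OrderLine(string sku, int quantity) { Sku = sku; Quantity = quantity; }
          public string Sku { get; private set; }
          public int Quantity { get; internal set; }    // mutable only within the aggregate's assembly
      }

      public class Order   // aggregate root
      {
          private readonly List<OrderLine> _lines = new List<OrderLine>();

          // Outsiders get a read-only view; holding on to it cannot violate invariants.
          public IEnumerable<OrderLine> Lines { get { return _lines.AsReadOnly(); } }

          public void AddLine(string sku, int quantity)
          {
              _lines.Add(new OrderLine(sku, quantity));
              CheckInvariants();                        // rule 7: checked on every change
          }

          private void CheckInvariants() { /* aggregate-wide invariant checks */ }
      }

    This is access modifiers plus convention rather than hard enforcement, which is roughly the trade-off the question is pointing at; NHibernate can typically be mapped against the private backing field so the read-only surface survives persistence.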

    Read the article

  • Updating entity fields in app engine development server

    - by Joey
    I recently tried updating a field in one of my entities on the app engine local dev server via the sdk console. It appeared to have updated just fine (a simple float). However, when I followed up with a query on the entity, I received an exception: "Items in the mSomeList list must all be Key instances". mSomeList is just another list field I have in that entity, not the one I modified. Is there any reason manually changing a field would adversely throw something off such that the server gets confused? Is this a known bug? I wrote an http handler to alter the field through server code and it works fine if I take that approach. Update: (adding details) I am using the python google app engine server. Basically if I go into the Google App Engine Launcher and press the SDK Console button, then go into one of my entities and edit a field that is a float (i.e. change it from 0 to 3.5, for instance), I get the "Items in the mMyList list must all be Key instance" suddenly when I query the entity like this: query = DataModels.RegionData.gql("WHERE mRegion = :1", region) entry = query.get() the RegionData entity is what has the mMyList member. As mentioned previously, if I do not manually change the field but rather do so through server code, i.e. query = DataModels.RegionData.gql("WHERE mRegion = :1", region) entry = query.get() entry.mMyFloat = 3.5 entry.put() Then it works.

    Read the article

  • Retrieve values from a multidimensional array

    - by vincentlerry
    I'm having great difficulty: I need to retrieve the [title], [url] and [abstract] values from this multidimensional array. Also, I have to store those values in a MySQL database. Thanks in advance!!! Array ( [bossresponse] => Array ( [responsecode] => 200 [limitedweb] => Array ( [start] => 0 [count] => 20 [totalresults] => 972000 [results] => Array ( [0] => Array ( [date] => [clickurl] => http://www.torchlake.com/ [url] => http://www.torchlake.com/ [dispurl] => www.torchlake.com [title] => Torch Lake, COLI Inc, Highspeed, Dial-up, Wireless ... [abstract] => Welcome to COLI Inc. Chain O' Lake Internet. Local Northern Michigan ISP, offering Dialup Internet access, Wireless access, Web design, and T1 services in Northern ... ) [1] => Array ( [date] => [clickurl] => http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [url] => http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [dispurl] => en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [title] => Torch Lake (Antrim County, Michigan) - Wikipedia, the free ... [abstract] => Torch Lake at 19 miles (31 km) long is Michigan's longest lake and at approximately 18,770 acres (76 km²) is Michigan's second largest lake. Within it are several ... ) This is the entire code that generates this array: require("OAuth.php"); $cc_key = ""; $cc_secret = ""; $url = "http://yboss.yahooapis.com/ysearch/limitedweb"; $args = array(); $args["q"] = "car"; $args["format"] = "json"; $args["count"] = 20; $consumer = new OAuthConsumer($cc_key, $cc_secret); $request = OAuthRequest::from_consumer_and_token($consumer, NULL, "GET", $url, $args); $request->sign_request(new OAuthSignatureMethod_HMAC_SHA1(), $consumer, NULL); $url = sprintf("%s?%s", $url, OAuthUtil::build_http_query($args)); $ch = curl_init(); $headers = array($request->to_header()); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $rsp = curl_exec($ch); $results = json_decode($rsp, true);
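
    Since json_decode($rsp, true) was used, the result is a plain array and the records live under ['bossresponse']['limitedweb']['results']. A sketch of extracting the three fields and inserting them with PDO; the DSN, credentials and the search_results table are placeholders:

      <?php
      // A sketch; $results is the array produced by json_decode($rsp, true).
      $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
      $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

      $stmt = $pdo->prepare(
          'INSERT INTO search_results (title, url, abstract) VALUES (?, ?, ?)');

      $rows = isset($results['bossresponse']['limitedweb']['results'])
            ? $results['bossresponse']['limitedweb']['results']
            : array();

      foreach ($rows as $row) {
          $stmt->execute(array($row['title'], $row['url'], $row['abstract']));
      }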

    Read the article

  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs. I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already trained database of WLPG. I managed to use the API (a type library dll) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the already existing one to avoid duplicate training for the user. I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in an SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says that it is in an older version (pre-3.5) and needs to be upgraded. I don't want to change the database, just read it. I don't know how WLPG reads it, since apparently I don't have the correct OLEDB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multiple cores

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
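
    Standard MD5/SHA-style digests are indeed sequential by construction, but a chunked ("tree") scheme can spread the work across cores at the price of producing a different, non-standard digest value. A Python sketch (not from the question; the file name is a placeholder); hashlib releases the GIL while hashing large buffers, so plain threads do scale here:

      # A sketch of a chunked/tree hash: hash fixed-size chunks in parallel,
      # then hash the concatenation of the chunk digests.  The result is NOT
      # equal to a plain sha256 of the whole file.
      import hashlib
      import os
      from concurrent.futures import ThreadPoolExecutor

      CHUNK = 8 * 1024 * 1024  # 8 MiB

      def _hash_chunk(path, offset):
          with open(path, 'rb') as f:
              f.seek(offset)
              return hashlib.sha256(f.read(CHUNK)).digest()

      def tree_hash(path, workers=4):
          offsets = range(0, os.path.getsize(path), CHUNK)
          with ThreadPoolExecutor(max_workers=workers) as pool:
              digests = pool.map(lambda off: _hash_chunk(path, off), offsets)
          return hashlib.sha256(b''.join(digests)).hexdigest()

      print(tree_hash('bigfile.bin'))

    Tree constructions of this kind are what the parallel modes of some non-mainstream hashes are built on (e.g. MD6, or the tree modes of Skein and BLAKE2).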

    Read the article

  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: they dump all of their typedefs into the boost namespace. This leaves me with three choices when using this facility: (1) use "using namespace boost"; (2) use "using boost::[u]<type><width>_t"; (3) explicitly refer to the target type with the boost:: prefix, e.g. boost::uint32_t foo = 0; Option #1 kind of defeats the point of namespaces. Even if used within local scope (e.g. within a function), things like function arguments still have to be prefixed like option #3. Option #2 is better, but there are a bunch of these types, so it can get noisy. Option #3 adds an extreme level of noise; the boost:: prefix is often as long as the type in question. My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that utilizes option #2 and be done with it? Also, wrapping the header like so didn't work on VC++ 10 (problems with standard library headers): namespace Foo { #include <boost/cstdint.hpp> using namespace boost; } using namespace Foo; Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.
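
    A sketch of the option #2 wrapper the question proposes: one header pulls just the fixed-width typedefs to global scope. If the platform's own <stdint.h> already declares these names globally, the using-declarations can clash, so that is an assumption to verify on VC++ 10:

      // my_stdint.hpp -- a sketch of a wrapper over boost/cstdint.hpp that
      // hoists only the fixed-width typedefs into the global namespace.
      #ifndef MY_STDINT_HPP
      #define MY_STDINT_HPP

      #include <boost/cstdint.hpp>

      using boost::int8_t;   using boost::uint8_t;
      using boost::int16_t;  using boost::uint16_t;
      using boost::int32_t;  using boost::uint32_t;
      using boost::int64_t;  using boost::uint64_t;

      #endif // MY_STDINT_HPP

    Pulling them into a single short project namespace instead of the global one avoids that clash at the cost of one small prefix.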

    Read the article

  • Has anyone setup tomcat to run virtual hosts using mod_jk

    - by Adam
    I work in OSX primarily with mostly PHP. Normally I work locally using MAMP and virtual hosts set up in my httpd.conf, so that I can point a browser to http://some-project and have as many projects as I need set up. We have a project coming up where we need to serve JSP pages, and I would like to set up my local Apache server to serve only JSP files to Tomcat and everything else to MAMP, using the same virtual hosts set up in: ~/applications/MAMP/conf/apache/httpd.conf So far I have: Successfully installed Tomcat. Placed mod_jk.so in ~/applications/MAMP/Library/modules/mod_jk.so. Added the module by placing: LoadModule jk_module modules/mod_jk.so in ~/applications/MAMP/conf/apache/httpd.conf. Created /Library/Tomcat/Home/conf/jk/workers.properties and added the following lines: workers.tomcat_home=/Library/Tomcat workers.java_home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home ps=/ worker.list=ajp12, ajp13 worker.ajp13.port=8009 worker.ajp13.host=localhost worker.ajp12.type=ajp13 worker.ajp13.mount=/*.jsp Added the following lines: JkWorkersFile /Library/Tomcat/Home/conf/workers.properties JkLogFile /Library/Tomcat/Home/logs/mod_jk.log JkLogLevel debug to ~/applications/MAMP/conf/apache/httpd.conf I cannot start MAMP, however, when these last two lines are present in my httpd.conf. Does anyone work like this? Any tips? Any clear ideas of what I'm doing wrong?

    Read the article

  • Combobox having values but not showing text

    - by ras
    Hi, I'm using the GWT-Ext ComboBox. My problem is that when I render the combobox, it has as many rows as it has values, but all the rows are empty; the text is not shown. Here's my code. Combobox cb = new Combobox(); cb.setForceSelection(true); cb.setMinChars(1); cb.setWidth(200); cb.setStore(store); // Store is perfectly loaded in combobox cb.setDisplayField("ReportName"); cb.setMode(ComboBox.LOCAL); cb.setTriggerAction(ALL); cb.setEmptyText("--Select--"); cb.setLoadingText("Searching..."); cb.setTypeAhead(true); cb.setSelectOnFocus(true); All the other code is working fine. I'm sure this problem is related to one of the ComboBox calls above. Thanks in advance.

    Read the article

  • Good case for a Null Object Pattern? (Provide some service with a mailservice)

    - by fireeyedboy
    For a website I'm working on, I made a Media Service object that I use in the front end as well as in the backend (CMS). This Media Service object manipulates media in a local repository (DB); it provides the ability to upload/embed videos and upload images. In other words, website visitors are able to do this in the front end, but administrators of the site are also able to do this in the backend. I'd like this service to mail the administrators when a visitor has uploaded/embedded a new medium in the frontend, but refrain from mailing them when they upload/embed a medium themselves in the backend. So I started wondering whether this is a good case for passing a null object, that mimics the mail functionality, to the Media Service in the backend. I thought this might come in handy when they decide the backend needs to have mail functionality implemented as well. In simplified terms I'd like to do something like this: Frontend: $mediaService = new MediaService( new MediaRepository(), new StandardMailService() ); Backend: $mediaService = new MediaService( new MediaRepository(), new NullMailService() ); How do you feel about this? Does this make sense? Or am I setting myself up for problems down the road?
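
    This is pretty much the textbook use of the pattern: both mail services implement one interface and MediaService calls it unconditionally. A sketch; the interface name and the send() signature are invented for illustration:

      <?php
      // A sketch; MailServiceInterface and send() are invented names.
      interface MailServiceInterface
      {
          public function send($subject, $body);
      }

      class StandardMailService implements MailServiceInterface
      {
          public function send($subject, $body)
          {
              mail('admin@example.com', $subject, $body);   // real notification
          }
      }

      class NullMailService implements MailServiceInterface
      {
          public function send($subject, $body)
          {
              // intentionally does nothing
          }
      }

      class MediaService
      {
          private $repository;
          private $mailService;

          public function __construct($repository, MailServiceInterface $mailService)
          {
              $this->repository = $repository;
              $this->mailService = $mailService;
          }

          public function addMedium($medium)
          {
              $this->repository->save($medium);
              // no "if ($this->mailService !== null)" branching needed anywhere
              $this->mailService->send('New medium', 'A visitor uploaded or embedded a medium.');
          }
      }

    If the backend later needs notifications too, the change is a one-line swap in the wiring rather than a change to MediaService.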

    Read the article

  • Chrome targeted CSS

    - by Chris
    I have some CSS code that hides the cursor on a web page (it is a client facing static screen with no interaction). The code I use to do this is below: *, html { cursor: url('/web/resources/graphics/blank.cur'), pointer; } Blank.cur is a totally blank cursor file. This code works perfectly well in all browsers when I host the web files on my local server but when I upload to a Windows CE webserver (our production unit) the cursor represents itself as a black box. Odd. After some testing it seems that chrome only has a problem with totally blank cursor files when served from WinCE web server, so I created a blank cursor with one pixel as white, specifically for chrome. How do I then target this CSS rule to chrome specifically? i.e. *, html { cursor: url('/web/resources/graphics/blank.cur'), pointer; } <!--[if CHROME]> *, html { cursor: url('/web/resources/graphics/blankChrome.cur'), pointer; } <![endif]-->
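
    IE-style conditional comments are only parsed by Internet Explorer, so the <!--[if CHROME]> block is ignored by every browser. One common workaround (an assumption that it fits this case, since it also matches Safari and other WebKit browsers, not just Chrome) is a WebKit-only media query:

      /* Served to everyone */
      *, html {
        cursor: url('/web/resources/graphics/blank.cur'), pointer;
      }

      /* Evaluated only by WebKit-based browsers (Chrome, Safari) */
      @media screen and (-webkit-min-device-pixel-ratio: 0) {
        *, html {
          cursor: url('/web/resources/graphics/blankChrome.cur'), pointer;
        }
      }

    The alternative is a small user-agent check in JavaScript that adds a class such as "chrome" to the html element and a rule keyed off that class.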

    Read the article

  • Do all the HTML5 storage systems work together ?

    - by azera
    While there is a lot of good stuff about HTML5, one thing I don't get is the redundant storage mechanisms. First there are localStorage and sessionStorage, which are key-value stores; one is for a single instance of the app ("one tab") and the other works for all the instances of that application, so they can share data. localStorage survives closing the browser, sessionStorage lasts only for the session, and both have a limited size (usually 5MB). That's great, and everything would be nice if we stopped there. But then there is the "Web SQL Database", which has the same security system as localStorage, the same size limit, the same everything, except it works like/is SQLite, with tables and SQL syntax and all of that. And the bummer is, they don't work on the same data at all! These are not two ways to access your data; they are really two separate stores for every HTML5 app out there (not created by default, yes, but still you see my point). What I would like to know is: is there a reason for both of these mechanisms to exist at the same time? Or did they just look at the SQL and NoSQL movements to pick the best and then go "screw it, let's add both!"? Why not implement local/session storage as a table inside the Web SQL DB?

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out static analysis of the source via PC-Lint in the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead. What other tests would you carry out?

    Read the article

  • Java applet loading images from external jars

    - by Mathias
    I have a jar on a server, and users should be able to develop extensions for it. Therefore the jar's main class should be extended, and some resources should be added to a second, user-created jar which will be loaded from another server or locally. Now I have problems accessing the resources (images) from the user-loaded jars. Here is the structure: My server: game.jar containing game.class images.class ... image1.png (...) Local: user.jar containing: user.class extends game userimage.png The extension is loaded via Greasemonkey: it modifies the "archive" attribute to "/home/username/user.jar, game.jar" and the "code" attribute to "user.class". The user should be able to override already defined images. If the image does not exist in game.jar, it is loaded correctly from user.jar. But the images loaded early in the game are always loaded from game.jar; others seem to be overridden correctly by the user. Is there a way to make sure they are always loaded in the correct order? This might be because of some caching mechanism. Because Greasemonkey removes the game from the page, changes the archive and code, and reinserts it, the game is loaded without a mod for a brief second. In that time, images are loaded as expected from game.jar, but those are the ones the user cannot override. But how to avoid it? Another thing: if I override the "run" method in user.class, the game is unable to load any image at all, not from user.jar and not from game.jar. Java doesn't find the image, as the URL object "getClass().getResource(imagename)" returns null. I tried to override image.class, but that doesn't fix the problem, unless I override every class from game.class involved in calling image.class.

    Read the article

  • Communicating with all network computers regardless of IP address

    - by Stephen Jennings
    I'm interested in finding a way to enumerate all accessible devices on the local network, regardless of their IP address. For example, in a 192.168.1.X network, if there is a computer with a 10.0.0.X IP address plugged into the network, I want to be able to detect that rogue computer and preferably communicate with it as well. Both computers will be running this custom software. I realize that's a vague description, and a full solution to the problem would be lengthy, so I'm really looking for help finding the right direction to go in ("Look into using class XYZ and ABC in this manner") rather than a full implementation. The reason I want this is that our company ships imaged computers to thousands of customers, each of which has different network settings (most use the same IP scheme, but a large percentage do not, and most do not have DHCP enabled on their networks). Once the hardware arrives, we have a hard time getting it up on the network, especially if the IP scheme doesn't match, since there is no one technically oriented on-site. Ideally, I want to design some kind of console to be used from their main workstation which looks out on the network, finds all computers running our software, displays their current IP address, and allows you to change the IP. I know it's possible to do this because we sell a couple of pieces of custom hardware which have exactly this capability (plug the hardware in anywhere and view it from another computer regardless of IP). I'm hoping it's possible to do in .NET 2.0, but I'm open to using .NET 3.5 or P/Invoke if I have to.
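
    One direction to look at (a sketch, not a full answer): as long as the two machines share the same physical segment, a UDP limited broadcast to 255.255.255.255 is delivered regardless of how their subnets are configured, so an announce/probe pair over UdpClient can find a 10.0.0.x box from a 192.168.1.x console. The port numbers and the "DISCOVER" message below are invented:

      // A .NET 2.0-style sketch of limited-broadcast discovery; ports 45678/45679
      // and the message format are placeholders.
      using System;
      using System.Net;
      using System.Net.Sockets;
      using System.Text;

      static class Discovery
      {
          const int ProbePort = 45678;
          const int ReplyPort = 45679;

          // Runs on every imaged machine: answer probes by broadcasting back,
          // so the reply arrives even when the subnets don't match.
          public static void Listen()
          {
              UdpClient server = new UdpClient(ProbePort);
              server.EnableBroadcast = true;
              while (true)
              {
                  IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
                  byte[] request = server.Receive(ref remote);
                  if (Encoding.ASCII.GetString(request) == "DISCOVER")
                  {
                      byte[] reply = Encoding.ASCII.GetBytes(Dns.GetHostName());
                      server.Send(reply, reply.Length, new IPEndPoint(IPAddress.Broadcast, ReplyPort));
                  }
              }
          }

          // Runs on the console workstation: broadcast a probe, collect replies.
          public static void Probe()
          {
              UdpClient console = new UdpClient(ReplyPort);
              console.EnableBroadcast = true;
              console.Client.ReceiveTimeout = 3000;
              byte[] probe = Encoding.ASCII.GetBytes("DISCOVER");
              console.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, ProbePort));
              try
              {
                  while (true)
                  {
                      IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
                      byte[] reply = console.Receive(ref remote);
                      Console.WriteLine("{0} -> {1}", remote.Address, Encoding.ASCII.GetString(reply));
                  }
              }
              catch (SocketException) { /* receive timeout: done collecting */ }
          }
      }

    Actually changing the remote machine's address afterwards is a separate, privileged step (e.g. the listener invoking netsh or WMI on its own box).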

    Read the article

  • Apache content negotiation returns wrong Content-Type with MultiViews :(

    - by edbras
    I am using Apache webserver 2.2 with content negotiation through MultiViews. I have: <Directory /home/develop/web/prodBuild> Options +MultiViews AddEncoding x-gzip .gz </Directory> This directory contains the gzip files. So if a browser requests the file bla.js, the server will return the file bla.js.gz, as the file bla.js doesn't exist. However, the Content-Type on my Ubuntu server is set to "application/x-gzip", which is wrong as it's a JavaScript file, so it has to be "application/javascript". I have this working on my local Windows Apache server, but I don't seem to get it working on the remote Ubuntu server. :(... No idea why... What is setting this wrong Content-Type, and why? BTW: I don't have the line "AddType application/x-gzip .gz .tgz" anywhere in my config, as then I would understand why it goes wrong. My workaround: add the following line under the above "AddEncoding": AddType "" .gz It will then set an empty string as the Content-Type and then it works. I think the browser will then try to discover the content type itself... I hope somebody can explain to me why this is not working and why this content type is set? :(

    Read the article

  • How do I make a serialization class for this?

    - by chobo2
    I have something like this (sorry for the bad names): <root xmlns="http://www.domain.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.Domain.com Schema.xsd> <product></product> <SomeHighLevelElement> <anotherElment> <lowestElement> </lowestElement> </anotherElment> </SomeHighLevelElement> </root> I have something like this for my class: public class MyClass { public MyClass() { ListWrapper= new List<UserInfo>(); } public string product{ get; set; } public List<SomeHighLevelElement> ListWrapper{ get; set; } } public class SomeHighLevelElement { public string lowestElement{ get; set; } } But I don't know how to write the code for "anotherElment"; I'm not sure if I have to make another wrapper around it. Edit: I now get an error in my actual XML file. I have this in my tag: xmlns="http://www.domain.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.Domain.com Schema.xsd It throws an exception on the root line saying there was an error with this stuff. So I don't know if it is mad at the schemaLocation, since I am using localhost right now, or what.
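
    With XmlSerializer, each nested element normally gets its own class, so SomeHighLevelElement would hold an anotherElment wrapper which in turn holds lowestElement. A sketch (element names follow the posted XML; the C# property names are adjusted only for illustration):

      // A sketch for XmlSerializer; element names follow the posted XML.
      using System.Collections.Generic;
      using System.Xml.Serialization;

      [XmlRoot("root", Namespace = "http://www.domain.com")]
      public class MyClass
      {
          [XmlElement("product")]
          public string Product { get; set; }

          [XmlElement("SomeHighLevelElement")]
          public List<SomeHighLevelElement> ListWrapper { get; set; }
      }

      public class SomeHighLevelElement
      {
          [XmlElement("anotherElment")]
          public AnotherElement AnotherElement { get; set; }
      }

      public class AnotherElement
      {
          [XmlElement("lowestElement")]
          public string LowestElement { get; set; }
      }

    Deserialization is then new XmlSerializer(typeof(MyClass)).Deserialize(reader); the nested types may also need Namespace set via XmlType/XmlElement attributes. Separately, if the xsi:schemaLocation attribute really is missing its closing quote as posted, that alone would make the parser reject the root line.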

    Read the article

  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi, I have successfully implemented simple OpenID authentication with Gmail, Yahoo, MyOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem with identifying accounts. For example: 'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32) 'openid_mode' => string 'id_res' (length=6) 'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37) 'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34) 'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43) 'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72) 'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69) 'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44) 'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80) 'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80) I thought that openid_claimed_id or openid_identity would be identical for every account, but after some testing it turns out the claimed id can be changed by the users themselves in Yahoo and changes randomly in the case of Gmail. FYI, I am using Zend OpenID with a bit of modification to the consumer class, and I can't use SREG [Simple Registration Extension] because it currently uses the OpenID v1 specification :( I haven't read the OpenID specification, so if anybody knows, please help me figure out how to identify OpenID accounts, i.e. to know whether an account has signed in before or not.

    Read the article

  • How to disable server-side caching on IIS 7.5 (asp net mvc3)

    - by troebr
    I'm struggling with my IIS setup regarding caching. Here's a brief description of my problem: I'm making a site for mobile and non-mobile, sharing the same controllers, i.e. mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5), apparently there is an aggressive server-side caching policy: If I access the website from a desktop machine, I get the correct pages (desktop version). If I now use my mobile phone to access the site, I get the desktop version (which implies a server-side cache; my phone is not using the same network). Conversely, if I restart the server and access the site using my phone first, I get the mobile version on my desktop (only for the pages I already visited, of course). I have tried 2 solutions so far: Disabling OutputCache in my Web.config: <httpModules> [..] <remove name="OutputCache" /> </httpModules> And unchecking "Enable output cache" in "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem with my other server (IIS 6.0), even though caching is enabled on it, which leads me to think it is related to the IIS 7 caching additions. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS insights!

    Read the article

  • Can we manipulate (subtract) the value of a property while template binding?

    - by Subhen
    Hi, I am currently defining a few grids as follows: <Grid.RowDefinitions> <RowDefinition Height="{TemplateBinding Height-Height/5}"/> <RowDefinition Height="{TemplateBinding Height/15}"/> <RowDefinition Height="{TemplateBinding Height/20}"/> <RowDefinition Height="{TemplateBinding Height/6}"/> </Grid.RowDefinitions> While the division works fine, the subtraction isn't yielding the expected output. I also tried the following: <RowDefinition Height="{TemplateBinding Height-(Height/5)}"/> Still no result. Any suggestions please? Thanks, Subhen ** Update ** Now in my XAML I tried implementing the IValueConverter like this: <RowDefinition Height="{TemplateBinding Height, Converter={StaticResource heightConverter}}"/> I added the resource as <local:medieElementHeight x:Key="heightConverter"/> Inside generic.cs I have coded the following: public class medieElementHeight : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { //customVideoControl objVdeoCntrl=new customVideoControl(); double custoMediaElementHeight = (double)value;//objVdeoCntrl.customMediaPlayer.Height; double mediaElementHeight = custoMediaElementHeight - (custoMediaElementHeight / 5); return mediaElementHeight; } #region IValueConverter Members object IValueConverter.ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } #endregion } But I am getting the exception "Unknown element Height in the element RowDefinition".
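
    Binding markup can't evaluate arithmetic like Height-Height/5; the arithmetic has to live in the converter, and TemplateBinding is a lightweight construct that (on Silverlight in particular) doesn't accept a Converter. A WPF-style sketch of the long-hand equivalent, passing the divisor through ConverterParameter, is below; that parameter is an assumption, so the converter would read it instead of hard-coding the 5. On Silverlight, binding inside RowDefinition is more limited, so applying the converter from code-behind may be necessary.

      <!-- A sketch: the long-hand form of TemplateBinding that also accepts a
           converter.  Returning a GridLength from the converter avoids relying
           on the implicit double -> GridLength conversion. -->
      <Grid.RowDefinitions>
          <RowDefinition Height="{Binding Height,
                                  RelativeSource={RelativeSource TemplatedParent},
                                  Converter={StaticResource heightConverter},
                                  ConverterParameter=5}"/>
      </Grid.RowDefinitions>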

    Read the article
