Search Results

Search found 1886 results on 76 pages for 'vendor neutrality'.


  • Strange MSI error when setup.exe is run

    - by Martin Jackson
    We're using Visual Studio 2008's Setup Project to create an installer for our .NET 3.5 app. We host the .exe and .msi files on a website for our client to access, and produce new ones regularly to provide updates. This has all been fine until recently, when we noticed cases where installing via the .exe fails.

    The symptoms: the .exe downloads fine and runs fine, and it appears to download the .msi successfully (the "downloading application files" step plods through happily). But at the end of the "preparing to install" step, instead of launching the installer UI it pops up a message saying "This installation package could not be opened. Verify that the package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer package".

    You'd think the .msi is simply corrupt, but running it explicitly (even after downloading it from the same location the .exe uses) works just fine. The problem occurs on only some of our machines, which run a mixture of XP and Windows 7. The only pattern I can see is that the affected machines tend to have had the application installed longer (i.e. they are updating the app rather than installing it for the first time). It seems to me that it might be something to do with how/where the .exe downloads the .msi to, and perhaps different versions are conflicting there. Has anyone experienced this before? Does anyone know where the installer .exe puts the .msi that it downloads?

  • WiX: Forcefully launch uninstall previous using CustomAction

    - by leiflundgren
    I'm writing a new major upgrade of our product. In my installer I start by finding the configuration settings of the previous version, then I'd like to uninstall that previous version. I have found several guides telling me how one should make an MSI suitable for such upgrades. However, the previous version was not an MSI and was not built according to best practices. It does, however, specify an UninstallString in the registry under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{GUID}. Using a RegistrySearch I can easily find the command below, which I store in UNINSTALL_CMD:

        RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\PROFES~1\RunTime\10\01\Intel32\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information\{GUID}\setup.exe" -l0x9 -removeonly

    I cannot get the hang of the CustomAction needed to perform the actual uninstall:

        <CustomAction Id="ca.UninstPrev" Property="UNINSTALL_CMD" ExeCommand="" />

    The MSI log says:

        Info 1721. There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor. Action: ca.UninstallPrevious, location: RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\PROFES~1\RunTime\10\01\Intel32\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information\{GUID}\setup.exe" -l0x9 -removeonly, command:

    Does anyone see what I am doing wrong here? Regards, Leif
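
    For what it's worth, a minimal sketch of how a "property + ExeCommand" (type 50) custom action is usually wired up: Windows Installer expects the named property to hold the executable and ExeCommand to hold only its arguments, so putting the entire command line into UNINSTALL_CMD and leaving ExeCommand empty is the kind of thing that typically produces error 1721. The CMD_EXE property, the /c trick and the scheduling below are illustrative assumptions, not taken from the question:

        <!-- Sketch only: the property names the executable, ExeCommand carries the arguments. -->
        <!-- CMD_EXE may need the full [SystemFolder]cmd.exe path, set via a type-51/SetProperty action. -->
        <Property Id="CMD_EXE" Value="cmd.exe" />
        <CustomAction Id="ca.UninstPrev"
                      Property="CMD_EXE"
                      ExeCommand="/c [UNINSTALL_CMD]"
                      Return="check" />
        <InstallExecuteSequence>
          <!-- run only when the registry search actually found an old UninstallString -->
          <Custom Action="ca.UninstPrev" After="InstallValidate">UNINSTALL_CMD</Custom>
        </InstallExecuteSequence>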

  • How do I send an XML document to an ASP.NET MVC page for manipulation

    - by Decker
    I have some hierarchical data stored as multiple XML files on the server, according to a vendor's schema. In my ASP.NET MVC (2!) application, I'd like the user to choose one of these hierarchies (i.e. files -- I provide a list in my controller's Index action). When the user selects one to "edit", my Edit action should return a page that presents the XML hierarchy (it's a representation of a folder tree). So my thought is that the view would return HTML containing a jQuery on-load Ajax call back to the server for the XML data, at which point I would present the tree using one of the many jQuery tree controls. On the client side I'd like the user to manipulate the tree and, when done, post back the new hierarchy, which would replace the original XML file representing that hierarchy.

    So my questions are:

    - What form should I use to send the data down: XML or JSON? If I send down XML, then I would have to not only read the XML -- which jQuery can do -- but also modify it and send it back. Can I use jQuery to modify this XML DOM? And will all the namespace declarations be preserved?
    - What form should I send the data back in? If I originally sent the client the hierarchy as JSON (using JsonResult), then presumably I would have a hierarchy of JavaScript objects. What options would I have for posting that back? Would I have to recreate the XML representation on the client and post that back? Or should I serialize back to JSON, post that to the server, and have the server do the work of recreating the XML according to the schema?

    Thanks for any advice.
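
    To make the round trip concrete, here is a rough sketch of the JSON variant being weighed; the URLs, action names and the renderTree/folderId identifiers are hypothetical, not from the question:

        // Sketch: fetch the hierarchy as JSON, then post the edited hierarchy back.
        $.getJSON('/Folders/Edit/' + folderId, function (tree) {
            renderTree(tree);                        // hand off to whatever tree widget is used
        });

        function save(tree) {
            $.ajax({
                url: '/Folders/Save/' + folderId,
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify(tree),          // json2.js provides JSON.stringify on older browsers
                success: function () { alert('Saved'); }
            });
        }

    In this arrangement the server action deserializes the JSON and rebuilds the schema-conformant XML, which keeps namespace handling out of the browser entirely.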

  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new Rails projects, reinstalled Haml, and reinstalled Rails. Here are exactly the steps I take when making my application (Rails 2.3.2):

        rails> rails test
        rails> cd test
        rails\test> haml --rails .
        rails\test> ruby script\generate model user email:string password:string
        rails\test> ruby script\generate controller users index
        rails\test> rake db:migrate

    Here's what the UsersController looks like:

        class UsersController < ApplicationController
          def index
            @users = User.all
          end
        end

    My routes:

        ActionController::Routing::Routes.draw do |map|
          map.resources :users
        end

    I now create views\users\index.html.haml:

        %table
          %th(style="text-align: left;")
            %h1 Users
          - for user in @users
            %tr
              %td= user.email
              %td= user.password

    Annnd run the server... I navigate to localhost:3000/users and I get this error message:

        Template is missing
        Missing template users/index.erb in view path app/views

    For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb, but this doesn't work either. I can't figure out why Rails suddenly will not render Haml for me.

  • Animate opacity and delay transition CSS

    - by user1876246
    I have a series of DIVs. I want DIV 1 & DIV 3 to fade out, and then DIV 2 and DIV 4 to slide left to take their place, 1 second after the fade. So far I have gotten them to fade out, but I cannot figure out how to delay the sliding. Here is my CSS; please ignore the lack of vendor prefixes for this question.

        .slide-show {
          -webkit-animation: fadeShow 0.25s 1 normal forwards ease-out;
          animation: fadeShow 0.25s 1 normal forwards ease-out;
          visibility: visible;
        }
        .slide-hide {
          -webkit-animation: fadeHide 0.25s 1 normal forwards ease-out;
          animation: fadeHide 0.25s 1 normal forwards ease-out;
          /* I need the following to be delayed for 1 second */
          visibility: hidden;
          position: absolute;
        }
        @keyframes fadeHide {
          0%   { opacity: 1; }
          100% { opacity: 0; }
        }
        @keyframes fadeShow {
          0%   { opacity: 0; }
          100% { opacity: 1; }
        }
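
    One common way to get the one-second gap, sketched here with a hypothetical slideLeft animation (the class and keyframe names below are made up, not from the question): the second time value in the animation shorthand is the delay, so the slide can simply start 1s after the class is applied. Note that properties like visibility and position don't animate gradually; they flip instantly, so they are usually toggled inside the keyframes or with a separately delayed transition.

        /* Sketch: the fade runs immediately; the slide waits 1s before starting. */
        .slide-move {
          -webkit-animation: slideLeft 0.5s ease-out 1s forwards;  /* duration 0.5s, delay 1s */
          animation: slideLeft 0.5s ease-out 1s forwards;
        }
        @keyframes slideLeft {
          from { transform: translateX(0); }
          to   { transform: translateX(-100%); }
        }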

  • Spring and hibernate.cfg.xml

    - by Steve Kuo
    How do I get Spring to load Hibernate's properties from hibernate.cfg.xml? We're using Spring and JPA (with Hibernate as the implementation). Spring's applicationContext.xml specifies the JPA dialect and Hibernate properties:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
          <property name="jpaDialect">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
          </property>
          <property name="jpaProperties">
            <props>
              <prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
            </props>
          </property>
        </bean>

    In this configuration, Spring is reading all the Hibernate properties via applicationContext.xml. When I create a hibernate.cfg.xml (located at the root of my classpath, the same level as META-INF), Hibernate doesn't read it at all (it's completely ignored). What I'm trying to do is configure the Hibernate second-level cache by inserting the cache properties in hibernate.cfg.xml:

        <cache usage="transactional|read-write|nonstrict-read-write|read-only"
               region="RegionName"
               include="all|non-lazy" />
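
    One workaround that is often suggested (sketched below, not taken from the question) is to skip hibernate.cfg.xml entirely and pass the second-level cache settings through the same jpaProperties block that already carries the dialect. The provider class shown assumes EhCache with Hibernate 3.x, so treat the exact keys and values as assumptions to verify against the versions in use:

        <property name="jpaProperties">
          <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
            <!-- Sketch: enable the second-level cache via JPA properties instead of hibernate.cfg.xml -->
            <prop key="hibernate.cache.use_second_level_cache">true</prop>
            <prop key="hibernate.cache.use_query_cache">true</prop>
            <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
          </props>
        </property>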

  • [Doxygen] How to document global dependencies for functions?

    - by Thomas Matthews
    I've got some C code from a 3rd-party vendor (for an embedded platform) that uses global variables (for speed & space optimizations). I'm documenting the code, converting it to Doxygen format. How do I put a note in the function documentation saying that the function relies on global variables and functions? Doxygen has special commands for annotating parameters and return values, as described here: Doxygen Special Commands. I did not see any commands for global variables. Example C code:

        extern unsigned char data_buffer[];  //!< Global variable.

        /*! Returns the next available data byte.
         *  \return Next data byte.
         */
        unsigned char Get_Byte(void)
        {
            static unsigned int index = 0;
            return data_buffer[index++];  //!< Uses global variable.
        }

    In the above code, I would like to add Doxygen comments stating that the function depends on the global variable data_buffer.
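
    There is no dedicated Doxygen command for this, so a common convention (a sketch, not an official feature) is to link the global with \ref inside a \note so the dependency is both stated and hyperlinked to the variable's own documentation:

        /*! Returns the next available data byte.
         *  \return Next data byte.
         *  \note Depends on the global \ref data_buffer; each call advances an internal index.
         */
        unsigned char Get_Byte(void);

    If many functions need the same annotation, a project-specific tag can be defined with an ALIASES entry in the Doxyfile so the wording stays consistent across the code base.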

  • Office Automation: What is destroying my encoding?

    - by Filburt
    I'm facing a problem with a Word Mail Merge automation controlled by our CRM system.

    The setup: the base for the Mail Merge is a Word .dot template which fires a macro on Document.New. Inside this macro I create a .NET component registered for COM:

        Set myCOMObject = CreateObject("MyCOMObject")

    The component pulls some data from a database and hands back string values which are assigned to Word DocumentVariables:

        Set someClass = myCOMObject.GetSomeClass(123)
        ActiveDocument.Variables("docaddress") = someClass.GetSenderAddress(456)

    All string values returned from the component are encoded in UTF-8 (codepage 1200).

    What happens: the problem arises when the CRM system calls Word to perform the Mail Merge: the string values from the component are turned into UTF-8 encoded strings. All the static text inside the template and the data pulled for the Mail Merge stay nicely encoded in UTF-16 - for example, the umlaut ü inside my DocumentVariables is turned into c3 b0 while it stays fc for the rest of the document (checked the file in a hex editor). If I create a document from a template with the same macro functionality but without performing a Mail Merge, all strings are fine, i.e. encoded in UTF-16.

    What changed: according to the CRM software vendor, the encoding of the Mail Merge data export was changed to UTF-16 with the new version we're currently testing. I found out that MS states you'll experience issues when the document and the Mail Merge data file encodings don't match.

    What I tried: since I'm assuming we merge with UTF-16 encoded data, I added the following lines to my macro:

        ActiveDocument.TextEncoding = msoEncodingWestern
        ActiveDocument.SaveEncoding = msoEncodingUnicodeLittleEndian

    This is what the Mail Merge data document specifies in its document properties.

  • 32 bit dllimport generating incorrect format error (0x8007000b) on win7 x64 platform

    - by DFP
    Hello, I'm trying to install and run a 32-bit application on a Win7 x64 machine. The application is built as a Win32 app. It runs fine on 32-bit platforms. On the x64 machine it installs correctly into the Programs (x86) directory and runs fine until I make a call into a 32-bit DLL. At that point I get the incorrect format error (0x8007000b), indicating it is trying to load a DLL of the wrong bitness. Indeed, it is trying to load the 64-bit DLL from the System32 directory rather than the 32-bit version in the SysWOW64 directory. Another 32-bit application provided by the DLL vendor runs correctly, and it does load the 32-bit DLL from the SysWOW64 directory. I do not have source for their application to see how they are accessing the DLL. I'm using DllImport as shown below to access the DLL. Is there a way to decorate the DllImport calls to force it to load the 32-bit version? Any thoughts appreciated. Thanks, DP

        public static class Micronas
        {
            [DllImport(@"UAC2.DLL")]
            public static extern short UacBuildDeviceList(uint uFlags);
            [DllImport(@"UAC2.DLL")]
            public static extern short UacGetNumberOfDevices();
            [DllImport(@"UAC2.DLL")]
            public static extern uint UacGetFirstDevice();
            [DllImport(@"UAC2.DLL")]
            public static extern uint UacGetNextDevice(uint handle);
            [DllImport(@"UAC2.DLL")]
            public static extern uint UacSetXDFP(uint handle, short adr, uint data);
            [DllImport(@"UAC2.DLL")]
            public unsafe static extern uint UacGetXDFP(uint handle, short adr, IntPtr data);
        }
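
    For context, the usual resolution is not an attribute on DllImport but the bitness of the calling process: an AnyCPU executable runs as a 64-bit process on x64 and the loader resolves the DLL name against System32, whereas an x86-targeted process is redirected to SysWOW64 automatically. A sketch of the MSBuild setting, assuming the calling assembly is the one being built:

        <!-- csproj sketch: force the executable to run as a 32-bit process on x64 Windows -->
        <PropertyGroup>
          <PlatformTarget>x86</PlatformTarget>
        </PropertyGroup>

    The same effect can be applied to an already-built AnyCPU binary with corflags SomeApp.exe /32BIT+.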

  • How can I change the text in a <span></span> element using jQuery?

    - by Eric Reynolds
    I have a span element as follows: <span id="download">Download</span>. This element is controlled by a few radio buttons. Basically, what I want is one button to download the item selected by the radio buttons, but I'm looking to make it a little more "flashy" by changing the text inside the <span> to say more specifically what they are downloading. The span is the download button, and I have it animated so that the span calls slideUp(), then should change the text, then return via slideDown(). Here is the code I am using that does not want to work:

        $("input[name=method]").change(function() {
          if($("input[name=method]").val() == 'installer') {
            $('#download').slideUp(500);
            $('#download').removeClass("downloadRequest").removeClass("styling").css({"cursor":"default"});
            $('#download').text("Download");
            $('#download').addClass("downloadRequest").addClass("styling").css({"cursor":"pointer"});
            $('#download').slideDown(500);
          } else if($("input[name=method]").val() == 'url') {
            $('#download').slideUp(500);
            $('#download').removeClass("downloadRequest").removeClass("styling").css({"cursor":"default"});
            $('#download').text("Download From Vendor Website");
            $('#download').addClass("styling").addClass("downloadRequest").css({"cursor":"pointer"});
            $('#download').slideDown(500);
          }
        });

    I changed the code a bit to be more readable, so I know it doesn't use the shorthand that jQuery so eloquently allows. Everything in the code works except the changing of the text inside the span. I'm sure it's a simple solution that I am just overlooking. Any help is appreciated, Eric R.
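
    A sketch of the usual culprit in code like this (a guess at the fix, not code from the question): $("input[name=method]").val() always returns the value of the first matched radio, not the checked one, so the branches may never fire. Reading the checked radio - or simply $(this) inside the handler - and swapping the text in slideUp's completion callback keeps the animation order intact:

        // Sketch: react to the radio that actually fired the change event.
        $("input[name=method]").change(function () {
            var method = $(this).val();              // same as $("input[name=method]:checked").val()
            var label = (method === 'url') ? "Download From Vendor Website" : "Download";

            $('#download').slideUp(500, function () {
                $(this).text(label)                  // swap the text while the span is hidden
                       .addClass("downloadRequest styling")
                       .css("cursor", "pointer")
                       .slideDown(500);
            });
        });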

  • Shoulda and Paperclip testing

    - by trobrock
    I am trying to test a couple models that have an attachment with Paperclip. I have all of my validations passing except for the content-type check.

        # myapp/test/unit/project_test.rb
        should_have_attached_file :logo
        should_validate_attachment_presence :logo
        should validate_attachment_size(:logo).less_than(1.megabyte)
        should_validate_attachment_content_type :logo, :valid => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

        # myapp/app/models/project.rb
        has_attached_file :logo, :styles => { :small => "100x100>", :medium => "200x200>" }
        validates_attachment_presence :logo
        validates_attachment_size :logo, :less_than => 1.megabyte
        validates_attachment_content_type :logo, :content_type => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

    The errors I am getting:

        1) Failure:
        test: Client should validate the content types allowed on attachment logo. (ClientTest)
        [/Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/assertions.rb:55:in `assert_accepts'
         vendor/plugins/paperclip/shoulda_macros/paperclip.rb:44:in `__bind_1276100387_499280'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: Client should validate the content types allowed on attachment logo. ']:
        Content types image/png, image/jpeg, image/pjpeg, image/x-png should be accepted and rejected by logo

    This happens on two different models that are set up the same way.

  • Console app showing message box on error

    - by holz
    I am trying to integrate with a vendor's app by calling it with command-line args from C#. It is meant to automate a process that we need to run without anyone interacting with the application. If there are no errors when running the process, it works fine. However, if there are any errors, the vendor's application will show a message box with the error code and error message and wait for someone to click the OK button. When the OK button is clicked it exits, returning the error code as the exit code. As my application is going to be a Windows service on a server, needing someone to click an OK button will be an issue. Just wondering what the best solution would be to get around this. My code calling the vendor app is:

        ProcessStartInfo startInfo = new ProcessStartInfo();
        startInfo.FileName = "someapp.exe";
        startInfo.Arguments = "somefile.txt";
        Process jobProcess = Process.Start(startInfo);
        jobProcess.WaitForExit();
        int exitCode = jobProcess.ExitCode;
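
    One defensive pattern for a service host is sketched below; it is a mitigation, not the vendor's intended usage: bound the wait and kill the hung process, accepting that the vendor's exit code is lost whenever the dialog never gets dismissed. The five-minute timeout is an arbitrary placeholder.

        // Sketch: don't block the service forever on the vendor's modal error dialog.
        ProcessStartInfo startInfo = new ProcessStartInfo("someapp.exe", "somefile.txt");
        startInfo.UseShellExecute = false;

        using (Process jobProcess = Process.Start(startInfo))
        {
            if (!jobProcess.WaitForExit(5 * 60 * 1000))   // hypothetical 5-minute budget
            {
                jobProcess.Kill();                        // takes the stuck message box down with it
                jobProcess.WaitForExit();
                // log a failure here; the vendor's real exit code is unavailable in this case
            }
            int exitCode = jobProcess.ExitCode;
        }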

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):

        An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.

    This is quite the opposite of what I am used to doing (which is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping). The JDBC driver will then call back into my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() returns a correctly initialised instance of my SQLData implementation holding all data from the UDT. To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:

    - I can use the standard API, instead of vendor-specific classes such as oracle.sql.STRUCT, etc.
    - I can generate source code from my UDTs, with appropriate getters/setters and other properties

    My questions:

    - What do you think of my approach of implementing SQLData? Is it viable, even if the Javadoc states otherwise?
    - What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?

    Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions to the above, jOOQ might provide an answer for you.
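
    For readers who haven't used this path, a minimal sketch of the pattern being described; the type name MY_SCHEMA.ADDRESS_T and its two attributes are hypothetical:

        import java.sql.SQLData;
        import java.sql.SQLException;
        import java.sql.SQLInput;
        import java.sql.SQLOutput;

        // Sketch: a custom Java mapping for a hypothetical SQL object type MY_SCHEMA.ADDRESS_T
        public class Address implements SQLData {
            public String street;
            public String city;
            private String sqlTypeName;

            public String getSQLTypeName() { return sqlTypeName; }

            public void readSQL(SQLInput stream, String typeName) throws SQLException {
                sqlTypeName = typeName;
                street = stream.readString();   // attributes are read in their declared order
                city   = stream.readString();
            }

            public void writeSQL(SQLOutput stream) throws SQLException {
                stream.writeString(street);
                stream.writeString(city);
            }
        }

        // Usage sketch: register the mapping, after which getObject() returns Address instances:
        // connection.getTypeMap().put("MY_SCHEMA.ADDRESS_T", Address.class);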

  • Loading a class file immediately AFTER startup

    - by Striker
    We have a few war files deployed inside an ear file. Some of the war files have a class that caches static data from our PLM system in singletons. Since some of the classes take several minutes to load, we use load-on-startup in the web.xml to load them ahead of time. This all works fine until we attempt to re-deploy the application on our production servers (WebLogic 10.3): we get an exception from our PLM API about a DLL already being loaded. Our PLM vendor has confirmed that this is a problem and stated that they don't support using load-on-startup. This is also a huge problem on our development boxes, where we have to redeploy the app all the time. Most of us, when we're not working on one of the apps that uses a cache, have them commented out. Obviously we can't do that on the production servers. Right now we transfer the ear to the production server, deploy it in the console, wait for it to crash, shut the app server instance down and then start it up again. We need to find a way around this... One suggestion was to create a servlet that we can call after the server boots that will load the various caches. While this would work, I'm looking for something a bit cleaner. Is there any way to detect that the server has started and then fire off the methods? Thanks.

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes.

    EDIT: Let me be clear: I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly. I don't want a tool that targets business people. I want a tool that understands what a star and snowflake schema is. I want to be able to tell it what the fact tables are and what the dimension tables are, and have it create a UI on top of them. This is an easier problem for the tool vendor to solve because I am spoon-feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

  • Upload An Excel File in Classic ASP On Windows 2003 x64 Using Office 2010 Drivers

    - by alphadogg
    So, we are migrating an old web app from a 32-bit server to a newer 64-bit server. The app is basically a Classic ASP app. The application pool is set to run in 64-bit and cannot be set to 32-bit due to other components. However, this breaks the old usage of the Jet drivers and the subsequent parsing of Excel files. After some research, I downloaded the 64-bit version of the new 2010 Office System Driver Beta and installed it. Presumably, this allows one to open and read Excel and CSV files. Here's the snippet of code that errors out; I think I followed the lean guidelines on the download page:

        Set con = Server.CreateObject("ADODB.Connection")
        con.ConnectionString = "Provider=Microsoft.ACE.OLEDB.14.0;Data Source=" & strPath & ";Extended Properties=""Excel 14.0;"""
        con.Open

    Any ideas why?

    UPDATE: My apologies, I did forget the important part, the error message:

        ADODB.Connection error '800a0e7a'
        Provider cannot be found. It may not be properly installed.
        /vendor/importZipList2.asp, line 56

    I have installed, and uninstalled/reinstalled, twice.

  • How to set UCS2 in numpy?

    - by mindcorrosive
    I'm trying to build numpy 1.2.1 as a module for a third-party Python interpreter (custom-built, py2.4, linux x86_64) so that I can make calls to numpy from within it. Let's call this one interpreter A. The thing is, the system-wide Python interpreter (also py2.4, let's call it B) from the vendor is built with --enable-unicode=ucs4, while the custom one uses UCS2. Needless to say, when I build the module with B, I get an error when I try to import numpy in A -- it complains about an undefined symbol _PyUnicodeUCS4_IsWhiteSpace. I've searched around and apparently there's no way around this but to compile a custom Python interpreter -- which I did (let's call it interpreter C), properly specifying the Unicode string length (verifiable through sys.maxunicode). I managed to build numpy with C as well, surprisingly enough, but the problem still persists when I try to import it in interpreter C. Previously, when I built numpy using B, there were no problems importing it in B, but A would complain. Perhaps there's an option when building numpy to specify the length of Unicode strings to be used, as when configuring Python builds? Or am I doing something else wrong?

    A few notes:

    - Upgrading to newer versions of Python and/or numpy is not an option: interpreter A will stay on this version of the grammar for the foreseeable future.
    - It is not possible to start interpreter A in standalone mode to build numpy with it, as it needs some other libraries preloaded.

    I know this whole thing is a mess, but I'd appreciate any help I can get to make it work. If you need more information, please let me know; I'd be happy to oblige. Thanks to everybody for their time in advance.

  • Malware - Technical analysis

    - by nullptr
    Note: please do not mod down or close. I'm not a stupid PC user asking you to fix my PC problem; I am intrigued and am having a deep technical look at what's going on.

    I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and, obviously, Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc. My question is: what can I do to look into this more deeply? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

  • Systems design question: DB connection management in load-balanced n-tier

    - by aoven
    I'm wondering about the best approach to designing a DB connection manager for a load-balanced n-tier system. Classic n-tier looks like this:

        Client -> BusinessServer -> DBServer

    A load-balancing solution as I see it would then look like this:

                            +--> ...            +--+
                            +--> BusinessServer +--+--> SessionServer --+
        Client -> Gateway --+--> BusinessServer +--|                    +--> DBServer
                            +--> BusinessServer +--+--------------------+
                            +--> ...            +--+

    As pictured, the business server component is load-balanced via multiple instances, and a hardware gateway distributes the load among them. The session server probably needs to sit outside the load-balancing array, because it manages state, which mustn't be duplicated.

    Barring any major errors in the design so far, what is the best way to implement DB connection management? I've come up with a couple of options, but there may be others I'm not aware of:

    - Introduce a new Broker component between the DBServer and the other components and let it handle the DB connections. The upside is that all the connections can be managed from a single point, which is very convenient. The downside is that there is now an additional "single point of failure" in the system, and since the other components must go through it for every request that involves the DB in some way, it is also a bottleneck.
    - Move the DB connection management into the BusinessServer and SessionServer components and let each handle its own DB connections. The upside is that there is no additional "single point of failure" or bottleneck component. The downside is that there is also no control over possible conflicts and deadlocks apart from what the DBServer itself can provide.

    What else can be done?

    FWIW: the technology is .NET, but none of the vendor-specific stacks are used (e.g. no WCF, MSMQ or the like).

  • flash creates more than one http request

    - by MilanAleksic
    We are facing an issue directly connected with the Flash API we've given to a 3rd-party flash vendor. To make a long story short, our API basically wraps domain logic on the client and creates a single POST request towards the server in JSON format. All would be OK, except that in the combination MacOS + Safari we receive double requests on the server (?). Even more interesting, we receive different agent names: one is the expected name/descriptor of the browser and system, the other is "CFNetwork".

        POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0
        POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0
        POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0
        POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0
        POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0
        POST /RuntimeDelegate.ashx - 80 Mozilla/5.0+(Macintosh;+U;+Intel+Mac+OS+X+10_4_11;+fr)+AppleWebKit/531.22.7+(KHTML,+like+Gecko)+Version/4.0.5+Safari/531.22.7 200 0 0
        POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0
        POST /RuntimeDelegate.ashx - 80 CFNetwork/129.24 200 0 0

    Has anyone encountered anything like this before?

  • osx rvm ruby 1.8.7 nokogiri 1.4.1 - ERROR: Failed to build gem native extension.

    - by tommasop
    I'm stuck with this problem.

        cat ~/.rvm/gems/ruby-1.8.7-p249/gems/nokogiri-1.4.1/ext/nokogiri/mkmf.log

    gives these errors (clipped):

        conftest.c:3: error: 'xmlParseDoc' undeclared (first use in this function)
        conftest.c:3: error: (Each undeclared identifier is reported only once
        conftest.c:3: error: for each function it appears in.)

    for several libraries which are found on the system. If I manually go into ~/.rvm/gems/ruby-1.8.7-p249/gems/nokogiri-1.4.1/ext/nokogiri/ and compile and install, everything goes fine:

        ruby extconf.rb
        make
        make install
        mkdir -p /Users/tommasop/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/i686-darwin9.8.0/nokogiri
        /usr/bin/install -c -m 0755 nokogiri.bundle /Users/tommasop/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/i686-darwin9.8.0/nokogiri

    except that on script/server:

        ? script/server --debugger
        => Booting Mongrel
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        The following gems have native components that need to be built
          nokogiri
        You're running:
          ruby 1.8.7.249 at /Users/tommasop/.rvm/rubies/ruby-1.8.7-p249/bin/ruby
          rubygems 1.3.6 at /Users/tommasop/.rvm/gems/ruby-1.8.7-p249, /Users/tommasop/.rvm/gems/ruby-1.8.7-p249%global
        Run `rake gems:build` to build the unbuilt gems.

    Any help greatly appreciated!

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error:

        File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107

    ...where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I have Loader.php check whether the requested file is either of these two cases and, if so, simply return rather than trying to load it. Obviously, this is unacceptable longer term. Has anybody encountered this, and how did you solve it?

    Edited to add: we have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
          public function setup()
          {
            // for compatibility / remove and enable only the plugins you want
            $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
          }
        }

    And we have a propel.ini file in our top-level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

  • how to get jquery.couch.app.js to work with IE8

    - by fuzzy lollipop
    I have tested this on Windows XP SP3 and Windows 7 Ultimate in IE7 and IE8 (in all compatibility modes) and it fails the same way on both. I am running the latest HEAD from the couchapp repository. This works fine on my OS X 10.6.3 development machine. I have tested with Chrome 4.1.249.1064 (45376) and Firefox 3.6 and they both work fine, as do Safari 4 and Firefox 3.6 on OS X 10.6.3. Here is the error message:

        Webpage error details
        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)
        Timestamp: Wed, 28 Apr 2010 03:32:55 UTC
        Message: Object doesn't support this property or method
        Line: 159
        Char: 7
        Code: 0
        URI: http://192.168.0.105:5984/test/_design/test/vendor/couchapp/jquery.couch.app.js

    And here is the "offending" bit of code from jquery.couch.app.js, which works just fine in Chrome, Firefox and Safari. The failure is reported on the line with qs.forEach():

        157    var qs = document.location.search.replace(/^\?/,'').split('&');
        158    var q = {};
        159    qs.forEach(function(param) {
        160      var ps = param.split('=');
        161      var k = decodeURIComponent(ps[0]);
        162      var v = decodeURIComponent(ps[1]);
        163      if (["startkey", "endkey", "key"].indexOf(k) != -1) {
        164        q[k] = JSON.parse(v);
        165      } else {
        166        q[k] = v;
        167      }
        168    });
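
    For context, a sketch of the likely explanation and one workaround (an inference from the error line, not something stated in the question): IE8's JScript engine has neither Array.prototype.forEach nor Array.prototype.indexOf, both of which this snippet uses, so shimming them before jquery.couch.app.js loads is one way to run the library unmodified:

        // Sketch: minimal ES5 array shims for IE8; include before jquery.couch.app.js.
        if (!Array.prototype.forEach) {
          Array.prototype.forEach = function (callback, thisArg) {
            for (var i = 0; i < this.length; i++) {
              if (i in this) { callback.call(thisArg, this[i], i, this); }
            }
          };
        }
        if (!Array.prototype.indexOf) {
          Array.prototype.indexOf = function (value) {
            for (var i = 0; i < this.length; i++) {
              if (i in this && this[i] === value) { return i; }
            }
            return -1;
          };
        }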

  • Python: How can I use Twisted as the transport for SUDS?

    - by jathanism
    I have a project based on Twisted that is used to communicate with network devices, and I am adding support for a new vendor (Citrix NetScaler) whose API is SOAP. Unfortunately the support for SOAP in Twisted still relies on SOAPpy, which is badly out of date. In fact, as of this question (I just checked), twisted.web.soap itself hasn't been updated in 21 months!

    I would like to ask if anyone has any experience they would be willing to share with utilizing Twisted's superb asynchronous transport functionality with SUDS. It seems like plugging a custom Twisted transport into SUDS' Client.options.transport would be a natural fit; I'm just having a hard time wrapping my head around it. I did come up with a way to call the SOAP method with SUDS asynchronously by utilizing twisted.internet.threads.deferToThread(), but this feels like a hack to me. Here is an example of what I've done, to give you an idea:

        # netscaler is a module I wrote using suds to interface with NetScaler SOAP
        # Source: http://bitbucket.org/jathanism/netscaler-api/src
        import netscaler
        import os
        import sys
        from twisted.internet import reactor, defer, threads

        # netscaler.API is the class that sets up the suds.client.Client object
        host = 'netscaler.local'
        username = password = 'nsroot'
        wsdl_url = 'file://' + os.path.join(os.getcwd(), 'NSUserAdmin.wsdl')
        api = netscaler.API(host, username=username, password=password, wsdl_url=wsdl_url)

        results = []
        errors = []

        def handleResult(result):
            print '\tgot result: %s' % (result,)
            results.append(result)

        def handleError(err):
            sys.stderr.write('\tgot failure: %s' % (err,))
            errors.append(err)

        # this converts the api.login() call to a Twisted thread.
        # api.login() should return True and is equivalent to:
        # api.service.login(username=self.username, password=self.password)
        deferred = threads.deferToThread(api.login)
        deferred.addCallbacks(handleResult, handleError)

        reactor.run()

    This works as expected and defers the return of the api.login() call until it is complete, instead of blocking. But as I said, it doesn't feel right. Thanks in advance for any help, guidance, feedback, criticism, insults, or total solutions.

  • How can I change Rails view code for site visitors using SSL?

    - by pjmorse
    My Rails app has some pages which require SSL and others where SSL is optional. The optional pages use some assets which are served off-site (images from a vendor) and are available at both http and https URLs. I need to use https when the page is accessed via SSL, to avoid the dreaded "this page contains both secure and insecure elements" warning. I've written code to return the image URLs as http by default and https if requested. My problem now is determining in the view how the request came in. request.ssl? doesn't work in views. I've tried using a before_filter which sets something like @ssl_request using request.ssl?, but that also always returns false. Is there a more elegant way to do this? The server stack is Nginx and Passenger. Other apps with Apache + Mongrel stacks pass an X_FORWARDED_PROTO header to tell Rails whether SSL is being used; is it possible that Nginx/Passenger doesn't do this?
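
    Following that last thought, a sketch of the usual remedy: have the nginx server block that terminates SSL tell Rails the original request was https, after which request.ssl? behaves consistently. The directive shown is the one documented by older Passenger-for-nginx releases for exactly this purpose; newer releases spell it differently, so treat the name as something to verify against the installed version.

        # Sketch: inside the nginx server block that terminates SSL for the app
        server {
            listen 443;
            ssl on;
            # ... certificate, passenger_root and passenger_enabled settings ...

            # Tell Rails the original request was HTTPS so request.ssl? returns true.
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
        }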
