Search Results

Search found 17958 results on 719 pages for 'local delivery'.

  • RMagick transparent_color deprecated? What's the alternative?

    - by user315975
    I'm developing an app that generates a fair number of transparent PNGs on the fly. These are used as overlays, to show areas of interest in a graphic, so they have to have transparent backgrounds. I am developing in Ruby on Rails, deploying on Heroku. What works fine in development is not working in production. I get this error when I call a drawing routine using RMagick:

        NotImplementedError (the `transparent_color=' method is not supported by ImageMagick 6.2.4):
        /usr/local/lib/ruby/gems/1.8/gems/rmagick-1.15.17/lib/RMagick.rb:1691:in `transparent_color='

    I'm using RMagick 2.12.1 on the development machine, but I'm not exactly certain how to discover the version of ImageMagick it's running against, so I'm not sure whether my local code is behind or ahead. I'm hoping behind, because then there may be a replacement for this call. Does anyone know what the fix is here? What's required to generate a transparent background, if not the call I'm using? I can't find this in the documentation; in fact, it was on a third-party site that I found any mention of this capability.
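
    One workaround that avoids `transparent_color=` entirely is to draw onto a canvas whose background is transparent from the start. A minimal sketch, assuming a stock RMagick install (the drawing calls are placeholders for the real overlay logic):

        require 'RMagick'

        # The block sets Image::Info options before the canvas is created;
        # 'none' gives a fully transparent background, so nothing needs to
        # be made transparent after the fact.
        img = Magick::Image.new(200, 100) { self.background_color = 'none' }

        gc = Magick::Draw.new
        gc.fill('red')
        gc.rectangle(20, 20, 120, 80)   # stand-in for the real area of interest
        gc.draw(img)

        img.write('overlay.png')        # PNG preserves the alpha channel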

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    What I've tried: I have two email storage architectures, old and new.

    Old:

    - courier-imapd on several (18+) 1TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers don't have replicas, and there are no backups either

    New:

    - dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which holds a max-15-minute-old rsync backup from the primary server
    - higher-ups/management wanted to pack as much storage as possible into each server, to minimise the cost of SSDs per server
    - the rsync'ing is done because GlusterFS wasn't replicating well under that much small/random IO
    - scaling out is expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, email accounts would be moved manually

    Concerns/doubts: I'm not convinced that the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure whether there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access; if one of the servers went down for whatever reason (planned or unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I'd get the advantage of a single namespace, so I wouldn't have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores mail on a locally mounted filesystem pose a roadblock to scaling out with minimal problems? Does one have to re-write (parts of) these to work with an object store of some sort, such as OpenStack Object Storage?

  • Using custom detectors with FindBugs Maven plugin

    - by Lóránt Pintér
    I have a nice JAR of custom FindBugs detectors that I'd like to use with the FindBugs Maven plugin. There is a way to do this via the plugin's <pluginList> configuration parameter, but that only accepts local files, URLs, or resources. The only way I've found is to somehow copy my JAR to a local file (maybe via the Dependency plugin) and then configure the FindBugs plugin something like this:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>2.3.1</version>
          <configuration>
            <pluginList>${project.build.directory}/my-detectors.jar</pluginList>
          </configuration>
        </plugin>

    But this is not very flexible. Is there a way to use Maven's dependency-management features together with FindBugs plugins? I'd like to use something like this:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <dependencies>
            <dependency>
              <groupId>com.lptr.findbugs</groupId>
              <artifactId>my-detectors</artifactId>
              <version>1.0</version>
            </dependency>
          </dependencies>
        </plugin>

    ...but this simply overrides the core FindBugs detectors.
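
    For what it's worth, the "copy to a local file first" approach mentioned above can at least stay inside Maven's dependency management by letting the maven-dependency-plugin do the copying before FindBugs runs. A sketch, not tested against the FindBugs plugin:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-detectors</id>
              <phase>validate</phase>
              <goals>
                <goal>copy</goal>
              </goals>
              <configuration>
                <artifactItems>
                  <artifactItem>
                    <groupId>com.lptr.findbugs</groupId>
                    <artifactId>my-detectors</artifactId>
                    <version>1.0</version>
                    <destFileName>my-detectors.jar</destFileName>
                  </artifactItem>
                </artifactItems>
                <outputDirectory>${project.build.directory}</outputDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>

    The version is then resolved from the repository like any other dependency, and <pluginList>${project.build.directory}/my-detectors.jar</pluginList> keeps working unchanged.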

  • How to find an XPath query to Element/Element without namespaces (XmlSerializer, fragment)?

    - by Veksi
    Assume this simple XML fragment, which may or may not have an XML declaration, and which has exactly one NodeElement as the root node, followed by exactly one other NodeElement, which may contain an assortment of different kinds of elements:

        <?xml version="1.0"?>
        <NodeElement xmlns="xyz">
          <NodeElement xmlns="">
            <SomeElement></SomeElement>
          </NodeElement>
        </NodeElement>

    How can I select the inner NodeElement and its contents without the namespace? For instance, "//*[local-name()='NodeElement/NodeElement[1]']" (and other variations I've tried) doesn't seem to yield results. More generally, what I'm really trying to accomplish is to deserialize a fragment of a larger XML document contained in an XmlDocument. Something like the following:

        var doc = new XmlDocument();
        doc.LoadXml(File.ReadAllText(@"trickynodefile.xml")); // ReadAllText to avoid Unicode trouble
        var n = doc.SelectSingleNode("//*[local-name()='NodeElement/NodeElement[1]']");
        using (var reader = XmlReader.Create(new StringReader(n.OuterXml)))
        {
            var obj = new XmlSerializer(typeof(NodeElementNodeElement)).Deserialize(reader);
        }

    I believe I'm missing just the right XPath expression, which has proven rather elusive. Any help much appreciated!
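
    As a point of reference, local-name() tests a single element, so it cannot take a path like 'NodeElement/NodeElement[1]'; each step of the path needs its own test. And since the inner element carries xmlns="", it is in no namespace at all and needs no workaround. A sketch of both working expressions:

        // The outer element is in namespace "xyz", so match it by local name;
        // the inner element has xmlns="" (no namespace), so a plain name works.
        var n = doc.SelectSingleNode("//*[local-name()='NodeElement']/NodeElement[1]");

        // Equivalent, matching both steps by local name:
        var n2 = doc.SelectSingleNode(
            "//*[local-name()='NodeElement']/*[local-name()='NodeElement'][1]");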

  • Two "Calendar" entries listed on iPad - can't write to calendar using EventKit

    - by Neal
    My iOS app integrates with the device's calendar. On my iPad, when I view the Calendar app and tap the Calendars button on the top left to choose which calendars to show, I see one entry named "Calendar". In my app, when I loop through the available calendars with the code below, "Calendar" is listed twice: one has CalDAV for its type, the other Local. I'm unable to create calendar entries in one of them (I believe the "Local" one), and I'm not sure why. Why do I see "Calendar" listed twice when I do NOT see it listed twice in the iCal app?

        public static List<string> Calendars
        {
            get
            {
                var calendars = new List<string>();
                var ekCalendars = EventStore.Calendars;
                if (ekCalendars != null && ekCalendars.Length > 0)
                {
                    foreach (EKCalendar cal in ekCalendars)
                    {
                        if (cal.AllowsContentModifications)
                            calendars.Add(cal.Title);
                    }
                    calendars.Sort();
                }
                return calendars;
            }
        }
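
    When iCloud or another CalDAV account is active, iOS typically keeps an empty local "Calendar" around but hides it in the built-in Calendar app, which would explain seeing it only through EventKit. A small debugging sketch for telling the two apart, assuming the Xamarin-style bindings used above expose the calendar type as EKCalendar.Type:

        foreach (EKCalendar cal in EventStore.Calendars)
        {
            // cal.Type distinguishes Local from CalDav even when titles collide.
            Console.WriteLine("{0}: type={1}, modifiable={2}",
                cal.Title, cal.Type, cal.AllowsContentModifications);
        }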

  • Develop multiple very similar projects at once

    - by Raveren
    I am developing a semi-complicated site that is available in several countries at once. Much effort has been put into making the code bases as similar as possible, and ultimately only the config file and some representational data will differ between them. Each project has its own SVN repository, which maps directly to a live test site; that part is handled by the IDE we use. Now I need to create some sort of system to keep all these projects in sync. The best theoretical solution so far is a local hook script that fires on commit and would:

    1. Merge the committed files from the project being committed into all other projects
    2. Optionally upload them to the live site, replacing the previous files

    The first problem is that I don't know how I would do the merging; I guess it would be like applying an SVN patch. The second is that if I do not upload the changes to the live server, how would I go about syncing the live and local code bases (replace older files?). The reason I'm posting this question, rather than working through the potentially huge trouble of solving these problems myself, is that I believe this is a pretty common situation, someone may already have a solution, and others may benefit from the answers in the future. Lastly, I'm on Windows 7, develop in PHP, and use TortoiseSVN.

  • Error in Firefox Loading Images

    - by Brian
    Hello. Details of the app: ASP.NET project, local web server, hosted in IIS locally, using the latest Firefox, forms authentication. I'm getting a logon user name/password box when trying to access my local web server. Using the Net panel in Firebug, I see the issue is with an animated GIF showing up as 401 Unauthorized. Checking the details, this is for the URL:

        http://localhost/<virtual>/Images/loading.gif

    Response headers:

        Server            Microsoft-IIS/5.1
        Date              Thu, 17 Jun 2010 19:02:58 GMT
        WWW-Authenticate  Negotiate NTLM
        Connection        close
        Content-Length    4046
        Content-Type      text/html

    Request headers:

        Host              localhost
        User-Agent        Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729; .NET4.0E)
        Accept            image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        Referer           <correct referer>
        Cookie            ASP.NET_SessionId=ux4bt345qjz4p3u1wm1zgmez; .ASPXFormsAuth=33F12B444040827B8ABF154EE4EDE43B6CA532432EB846987B355097E00256DF0955C76A37BC593EAA961747BF1CC1D8949FF63C6F2CA69D77213EB15B4EDFAF57A83D9E1F88AB8D821C3A09C07EA2EE; .ASPROLES=qTFrGteJydYAE3118WGXbhJthTDdjdtuQ06t4bYVrM1BwIfcEHU1HhnEcs7TqSOaV-fIN5MH3uO57oNVWXDvrhkZ8gQuURuUk_K0TpoR-DEFXuF953Gl9aIilKAdV211jutMNQmhkt2rdPE2tEhHs3pz953fADxjAOyZl7K-AqNvMk3yqJshhKHhJIf-ALMhWIYlrrKy0WsYznUwh3WCtPfzEBD5XzmXU8HVMJ2-ArLjBISuegvSmxvK1PuXBPhoMRMi9Ynaw6xi9ypGk-R6uN0ljOMCGkB2-20WUlFuP0xWTfac_zCTDT00pbpnyjtygnM-LShOXTrZ_mhoRuXfKYEYSodNihwD6SRr19Nm-8uZ5BQ-W81svM17S2C0vc0FaxtiuAcN_vHcsN1OEJeCuVfRjeqzo9xWEViupP3Vh6aOcCm6yrftgw5x94piuCJO7tCfXjJAw5RVUWDBBWv5gmid171F0k-_XZ0CSv7Gm2Eai1BRfogAqQ_MV3tyPv7XVEyJXRXqYGlf1JpkfTW8S8On4E05v9gx9RcdnKHZebiOZwbP1_ho9nG7pMwXysbhjxtxwZ-zLx-v11_rhZw_i5m7iNcLtt4BbFU-sb_crzMpCKGywHIc452Zp1E0kx1Rfx-2eUnaiLiCfGed-QqelO88NYTpJHttGKEfhFrDgmaIXZPJRtuZ-GrS6t3Vla-8qDAVb1p6ovPwoVT4z4BhQyFsk542gDx-uQDw6D0B6zo7lXfcOjtolUxDcLbETsNlYsexZaxFpRSbw7M1ldwL_k92P9wLPlv9mw4NtyhXKJesMu7GjquZuoBN3hO00AqJEe1tKFFtfrvbE5ZH7uNu7myNdtlxRPe3WZe7qukbqHo1
        Pragma            no-cache
        Cache-Control     no-cache

    Any ideas? Thanks.

  • Git rebase and semi-tracked per-developer config files.

    - by dougkiwi
    This is my first SO question, and I'm new-ish to Git as well. Background: I am supposed to be the version control guru for Git in my group of about 8 developers. As I don't have a lot of Git experience, this is exciting. I decided we need a shared repository that would be the authoritative master for the production code and the main meeting point for the development code; as we work for a corporation, we really do need to show an authoritative source for the production code at least. I have instructed the developers to pull with rebase when pulling from the shared repository, then push the commits they want to share.

    We have been running into problems with a particular type of file. One of these files, which I currently assume is typical of the problem, is called web.config. We want a version-controlled master web.config for devs to clone, but each dev may make minor edits to this file that they wish to save locally but not share. The problem is this: how do I tell Git not to consider local changes or commits to this file relevant for rebasing and pushing? .gitignore does not seem to solve the problem, but maybe that's because I put web.config into .gitignore too late? In some simple situations we have stashed local changes, rebased, pushed, and popped the stash, but that doesn't seem to work all of the time, and I haven't picked up the pattern quite yet. The published documentation on pull --rebase tends to deal with simpler situations. Or do I have the wrong idea entirely? Are we misusing Git? Dougkiwi
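
    Two facts may help frame this: .gitignore only affects files that are not yet tracked, so adding an already-committed web.config there has no effect, and Git has a per-file switch for exactly this "tracked, but leave my local edits alone" case. A sketch of the commonly used approach (each developer runs this once in their clone):

        # Keep web.config tracked, but stop noticing local edits to it:
        git update-index --skip-worktree web.config

        # Revert to normal behaviour when a change really must be shared:
        git update-index --no-skip-worktree web.config

    The longer-term pattern is to commit something like a web.config.template and ignore or generate the real web.config, so the shared file is never edited locally at all.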

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc., but the drawback that each connection maxes out at about 6Mb/s. So any one outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections well. However, single-HTTP-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data back to the client. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
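
    For reference, the splitting itself is plain HTTP/1.1: the Range header requests a byte window, and a server that supports it answers 206 Partial Content (a server that doesn't simply returns 200 with the whole body, which the proxy must be prepared to handle). A sketch of two chunk requests, with hypothetical chunk sizes:

        GET /bigfile.iso HTTP/1.1
        Host: example.com
        Range: bytes=0-5242879

        GET /bigfile.iso HTTP/1.1
        Host: example.com
        Range: bytes=5242880-10485759

    Download managers such as aria2 already do this client-side with multiple connections, which may be an easier starting point than a transparent proxy.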

  • Failed at linking C++ [undefined reference boost::filesystem3 ... ]

    - by Pphax
    I'm having some trouble compiling my work; I'm using Ubuntu with g++. I get a lot of these messages:

        undefined reference to `boost::filesystem3::directory_entry::m_get_status(boost::system::error_code*) const'
        undefined reference to `boost::filesystem3::path::extension() const'
        undefined reference to `boost::filesystem3::path::filename() const'
        undefined reference to `boost::filesystem3::path::filename() const'
        (etc...)

    I've searched and found many answers, but none of them work for me. When linking, these two libraries are shown (I'm guessing the error is related to the second one):

        -lboost_system (/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/libboost_system.so)
        -lboost_filesystem (/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/libboost_filesystem.so)

    Locating the library shows:

        hax@lap:~$ locate libboost_filesystem.so
        /home/hax/boost_1_47_0/bin.v2/libs/filesystem/build/gcc-4.4.5/release/threading-multi/libboost_filesystem.so.1.47.0
        /home/hax/boost_1_47_0/stage/lib/libboost_filesystem.so
        /home/hax/boost_1_47_0/stage/lib/libboost_filesystem.so.1.47.0
        /usr/lib/libboost_filesystem.so
        /usr/lib/libboost_filesystem.so.1.42.0
        /usr/local/lib/libboost_filesystem.so
        /usr/local/lib/libboost_filesystem.so.1.47.0

    This is the related line in my makefile:

        -L. -L../bncsutil/src/bncsutil/ -L../StormLib/stormlib/ -L../boost/lib/ -lbncsutil -lpthread -ldl -lz -lStorm -lmysqlclient_r -lboost_date_time -lboost_thread -lboost_system -lboost_filesystem -Wl -t

    I tried pointing -L at several different places where the filesystem .so was located, but it didn't work. Can anyone see the problem in those lines? If you need me to post some extra data, I'll do it; I'm not seeing the problem :( Thanks :)
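
    One detail worth noting: the boost::filesystem3 namespace only exists in Boost 1.44 and later, while the /usr/lib copy the linker reports is 1.42, so the headers being compiled against (1.47) and the library being linked (1.42) almost certainly disagree. A sketch of linking against the self-built 1.47 instead, using the paths from the locate output above:

        g++ ... -L/home/hax/boost_1_47_0/stage/lib \
                -lboost_system -lboost_filesystem \
                -Wl,-rpath,/home/hax/boost_1_47_0/stage/lib

    The -Wl,-rpath part bakes the library path in at run time, so the 1.47 .so is found ahead of the system 1.42 copy.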

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update, and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/MSI. Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure, and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button', and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.)

    So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the pain now, and I'm having to learn the Microsoft stuff anyway until/unless we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration, and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring, and updating Microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons I've not yet found that PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background. Thoughts? Or weblinks? Thanks!

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I open Excel files to show documents to the user. Before showing a file, I save it to a folder on the local machine, which is in fact what is used for display. When the user closes the application, I want to close the opened Excel files and delete all the Excel files in my local folder. In the logout event I have written code to close all the opened files, like this:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            IntPtr pFoundWindow = p.MainWindowHandle;
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.CloseMainWindow();
                p.Exited += new EventHandler(p_Exited);
            }
        }

    In the process's Exited event I want to delete the Excel file whose process has exited, like this:

        void p_Exited(object sender, EventArgs e)
        {
            string file = strOriginalPath;
            if (File.Exists(file))
            {
                // PDF issue fix
                FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
                fs.Flush();
                fs.Close();
                fs.Dispose();
                File.Delete(file);
            }
        }

    But the problem is that this Exited event is not called at all. On the other hand, if I delete the file right after closing the main window of the process, I get the exception "File already used by another process". Could anyone help me achieve my objective, or give me a reason why the Exited event is not being called?
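
    For what it's worth, Process.Exited is only raised when EnableRaisingEvents is set to true on the Process instance, and the handler should be attached before the process is asked to close. The loop above would therefore normally need to look something like this sketch:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.EnableRaisingEvents = true;            // without this, Exited never fires
                p.Exited += new EventHandler(p_Exited);  // subscribe first
                p.CloseMainWindow();                     // then ask Excel to close
            }
        }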

  • Reload mod_fcgid without killing Python Service

    - by Tobias
    Hi. I'm currently running a Django project on my school's web server with FCGI. I followed the guides that recommend installing a virtual local Python environment, and it worked out great. The only issue I had was that "touching" my .fcgi file to reload source files wasn't enough; instead I had to kill the Python process via SSH, because mod_fcgid is used. However, the admin didn't think it was a great idea that I ran my own local Python. He thought it better if I just told him which modules to install as root, which was a pretty nice service really. But having done this, I can no longer kill Python, since it runs under root (though, immoral as I am, I've definitely tried). The admin's recommendation was that I should try to make the FCGI script reload itself by checking a timestamp. I've tried to find documentation on how to do this, but found very little, and since I'm an absolute beginner I have no idea what would work. Does anyone have experience running Python/Django under mod_fcgid, or tips on where to find related guides/documentation?
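
    The timestamp idea the admin suggested is usually implemented by having the FCGI script remember its own mtime at startup and exit when that changes; mod_fcgid then spawns a fresh worker, which re-imports all the source, on the next request. A minimal sketch, assuming the check is called at the top of each request:

        import os
        import sys

        SELF = os.path.abspath(__file__)
        STARTUP_MTIME = os.stat(SELF).st_mtime

        def exit_if_touched():
            # After `touch myapp.fcgi`, the next request ends this worker;
            # mod_fcgid starts a replacement with freshly imported code.
            if os.stat(SELF).st_mtime > STARTUP_MTIME:
                sys.exit(0)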

  • Can't fetch production DB results using Google App Engine remote_api

    - by Alon
    Hey, I'm trying to work with /remote_api on a django-patched App Engine app I have running. I want to select a few rows from my online production app locally. I can't seem to manage to do so: everything authenticates fine and it doesn't break on imports, but when I try to fetch something it just doesn't print anything. I placed the test script inside my local app dir:

        #!/usr/bin/env python
        import os
        import sys

        # Hardwire appengine modules into PYTHONPATH
        # (or use a wrapper to do it more elegantly)
        appengine_dirs = ['myworkingpath']
        sys.path.extend(appengine_dirs)

        # Add your models to the path
        my_root_dir = os.path.abspath(os.path.dirname(__file__))
        sys.path.insert(0, my_root_dir)

        from google.appengine.ext import db
        from google.appengine.ext.remote_api import remote_api_stub
        import getpass

        APP_NAME = 'Myappname'
        os.environ['AUTH_DOMAIN'] = 'gmail.com'
        os.environ['USER_EMAIL'] = '[email protected]'

        def auth_func():
            return (raw_input('Username:'), getpass.getpass('Password:'))

        # Use the local dev server by passing in as parameter:
        #   servername='localhost:8080'
        # Otherwise, remote_api assumes you are targeting APP_NAME.appspot.com
        remote_api_stub.ConfigureRemoteDatastore(APP_NAME, '/remote_api', auth_func)

        # Do stuff as if your code were running on App Engine
        from channel.models import Channel, Channel2Operator

        myresults = Channel.all().fetch(10)
        for result in myresults:
            print result.key()

    It doesn't give any error or print anything, and neither does the remote_api console example from Google. When I print myresults I get [].

  • Cannot create a new VS data connection in Server Explorer

    - by Seventh Element
    I have a local instance of SQL Server 2008 Express Edition running on my development PC. I'm trying to create a new data connection through the Visual Studio Server Explorer. The steps are the following:

    1. Right-click the "Data Connections" node; the "Choose Data Source" dialog appears.
    2. I select "Microsoft SQL Server" as the data source; the "Add Connection" dialog window appears.
    3. I select my local server instance; "Test connection" works fine.
    4. I select "AdventureWorks" as the database name; "Test connection" works fine.
    5. I hit the "OK" button and get the error message: "This server version is not supported. Only servers up to MS SQL Server 2005 are supported."

    I'm using Visual Studio 2008 Professional Edition. The target framework of the application is ".NET Framework 3.5". I have a reference to System.Data (framework v2.0) and cannot find another version of the assembly on my system. Am I referencing the wrong assembly? How can I fix this problem?

  • C++: IDE Code assistance (Linux)

    - by Martijn Courteaux
    Hi. My IDE (NetBeans) thinks this code is wrong, but it compiles correctly:

        std::cout << "i = " << i << std::endl;
        std::cout << add(5, 7) << std::endl;
        std::string test = "Boe";
        std::cout << test << std::endl;

    It always says: unable to resolve identifier ... (... = cout, endl, string). So I think it has something to do with code assistance, and that I have to change/add/remove some folders. Currently, I have these include folders:

    C compiler:

        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include-fixed
        /usr/include

    C++ compiler:

        /usr/include/c++/4.4.3
        /usr/include/c++/4.4.3/i486-linux-gnu
        /usr/include/c++/4.4.3/backward
        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/include

    Thanks

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes, but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host: both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved), how can I calculate the one-way latency?

    In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? I'm certainly aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.

    Added: after considering the comment below, the second portion of the question is probably better off on Server Fault.
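
    For a concrete sense of why this is hard: write d_f and d_r for the forward and reverse one-way delays and θ for the (unknown) offset of the remote clock relative to the local one. With t1 = local send time, t2 = remote receive time, t3 = remote send time, and t4 = local receive time, the only two things you can measure are:

        t2 - t1 = d_f + θ        (forward timestamp difference)
        t4 - t3 = d_r - θ        (reverse timestamp difference)

    Adding them cancels θ and yields only the round trip, d_f + d_r; this is exactly the identity NTP is built on. Two equations, three unknowns (d_f, d_r, θ): the split between d_f and d_r is unobservable without an independent way to bound θ, which is why NTP itself simply assumes a symmetric path.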

  • Cross-platform build UNC share (Windows->Linux) - possible to be case-sensitive on CIFS share?

    - by holtavolt
    To optimize builds between Windows and Linux (Ubuntu 10.04), I've got a UNC share of the source tree that is shared between systems, and all build output goes to local disk on each system. This mostly works great, as source updates and changes can quickly be tested on both systems, but there's one annoying limitation I can't find a way around: the Linux CIFS mount is case-insensitive. Consequently, a test compile of code with an error like:

        #include "Foo.h"

    for a file foo.h will not be caught by a test build (until a local compile is done on the Linux box, e.g. in the nightly builds). Is it possible to make the Windows UNC share case-sensitive on the Linux box? I've tried a variety of fstab and mount combinations with no success, as well as editing smb.conf to set "case sensitive = yes". Given what the Ubuntu man page says about this:

        nocase  Request case insensitive path name matching (case sensitive is the default if the server supports it).

    I suspect this is a limitation on the Windows UNC side, and there's nothing to be done short of switching to some other mechanism (is NFS still viable anywhere?). If anyone has already solved this to support optimized cross-platform build environments, I'd appreciate hearing about it!

  • Complicated API issue with calling assemblies dynamically?

    - by Stefanos Tses
    I have an interesting challenge that I'm wondering if anyone here can give me some direction on. I'm writing a .NET Windows Forms application that runs on a network and uses a SQL Server to save and pull data. I want to offer a mini "plugin" API, where developers can build their own assemblies and implement a specific interface (IDataManipulate). These assemblies can then be used by my application to call the interface functions and do something. I can create assemblies using my API, copy the file to a folder on my local hard drive, and configure my application to use reflection to call a specific function from the implemented interface (IDataManipulate.Execute).

    The problem: since the application will be installed on multiple workstations in the network, it is impossible to copy the plugin DLLs the users create to each machine.

    Solutions I tried:

    Solution 1: Copy the API DLL to a network share. Problem: requires AllowPartiallyTrustedCallersAttribute, which requires .NET signing, which I can't force on my users.

    Solution 2 (preferred): Serialize the DLL object, save it to the database, deserialize it, and call IDataManipulate.Execute. Problem: after deserialization, I try to cast it to an IDataManipulate object, but I get an error looking for the actual DLL file.

    Solution 3: Save the DLL bytes as byte[] to the database and recreate the DLL on the local PC every time the user starts my application. Problem: the DLL may have dependencies, which I don't know if I can detect.

    Any suggestions will be greatly appreciated. Thanks
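
    A variant of solution 3 that skips the temp file entirely: Assembly.Load has an overload that takes a byte[], so the bytes pulled from the database can be loaded straight into the AppDomain. A sketch, where GetPluginBytesFromDatabase is a hypothetical data-access helper:

        // Load the plugin assembly directly from its stored bytes;
        // no file needs to be written to disk.
        byte[] raw = GetPluginBytesFromDatabase("MyPlugin");   // hypothetical helper
        Assembly asm = Assembly.Load(raw);

        foreach (Type t in asm.GetTypes())
        {
            if (typeof(IDataManipulate).IsAssignableFrom(t) && !t.IsAbstract)
            {
                var plugin = (IDataManipulate)Activator.CreateInstance(t);
                plugin.Execute();
            }
        }

    Dependencies are still the hard part: if a plugin references other non-GAC assemblies, those loads fail at JIT time unless an AppDomain.AssemblyResolve handler fetches them (e.g. from the same database table).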

  • Problem displaying an image in Firefox

    - by Param-Ganak
    Hello friends. I have a JSP page containing a div tag with an IMG tag inside it, which I want to use to show an image. The source path of the image comes from the database, so I assign it to a JSP variable using a scriptlet. The image path may point to a different machine or to the same machine, i.e. the images may be stored on another machine or on the local machine on a different drive. The problem is how to give the path of an image stored on a different machine, as well as on the same machine. I have tried different ways, such as giving the IP address of that machine. Here is the path for an image stored on the local machine:

        <img src="file:\localhost\D:\ScannedSheets\testproj/batch1/IMG001.jpg"
             style="z-index:1; position:absolute; top:0; left:0; width:850; height:1099">

    With this syntax the image is visible in Internet Explorer, but not in Firefox, Google Chrome, etc. Please guide me, friends: how can I give the path of an image, stored on a different machine or the same one, that works in Internet Explorer, Firefox, Google Chrome, etc.?

  • AJAX/JSONP question: access is denied in IE on a cross-domain request

    - by Sisir
    OK, here we go. I have already searched Stack Overflow for the answer and found some useful info, but I want to clear up a few more things. I also searched the net, with no real help. I have worked with some APIs (Yelp, outside.in). With Yelp, I used to inject a script into the head with a URL request to the API and a callback function; that worked fine in all browsers. But when calling the outside.in API, the callback is not invoked. Yelp has a URL field that can be used like callback=callbackfunction, so the callback is called automatically, but outside.in has no such field. Is there a standard way to request a callback that works regardless of the server/API?

    I also tried a standard AJAX request using jQuery's $.ajax() function. It works on my local PC in both IE and other browsers, but on the remote site it does not work in IE, showing the error "Access is denied"; other browsers seem OK. Firebug in my FF also reports no errors. outside.in has a JavaScript example, but it is too hard for me to understand: github.com/outsidein/api-examples/tree/master/javascript/browser/

    Sites: the one I'm working on is http://citystir.com; Yelp: yelp.com; outside.in: outside.in

    Technical info: I'm using WampServer locally and WordPress, hosted with GoDaddy on Apache/Linux remotely.

    The URL for the jQuery $.ajax call is built like:

        "http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories?dev_key=" + key + "&sig=" + signeture + "&limit=3"

        function makeOutsideRequest(url) {
            $.ajax({
                url: url,
                dataType: 'json',
                type: 'GET',
                success: function (data, status, xhr) {
                    if (data == null) {
                        alert("An error occurred connecting to " + url +
                              ". Please ensure that the server is running and " +
                              "configured to allow cross-origin requests.");
                    } else {
                        printHomeNews(data);
                    }
                },
                error: function (xhr, status, error) {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    Thanks!
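
    For background: a plain XMLHttpRequest (dataType: 'json') to another domain is blocked by the same-origin policy, which is what older IE surfaces as "Access is denied"; it only appeared to work locally because the page and the API were effectively not cross-origin in that setup. The standard cross-domain answer of that era is JSONP, which only works if the API can wrap its response in a callback. A sketch, assuming the outside.in endpoint supports a JSONP-style callback parameter:

        function makeOutsideRequest(url) {
            $.ajax({
                url: url,
                dataType: 'jsonp',   // jQuery appends callback=? and injects a <script> tag
                success: function (data) {
                    printHomeNews(data);
                },
                error: function () {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    If the API offers no callback parameter at all, the remaining option is to proxy the request through your own server (e.g. a small PHP script on citystir.com), so the browser only ever talks to your own domain.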

  • How to retrieve content via .load() or $.get() with this line

    - by Sin
    Hello :) I posted a question a day or two ago about how to retrieve PHP content via AJAX in a modal I was using. I sort of found the right way to go about it, but there's still something I'm not doing right (obviously, lol). Here's the section that's giving me issues:

        jQuery('div that holds content').fadeIn(200).css({ 'width': Number( popWidth ) });
        $('').load('/something/somewhere/this #content');

    I'm using Safari and a local server (MAMP). When I check activity in my browser, it shows that the content is loaded with every click, AND the pop-up pops up, but with no content. When I simply retrieve the content via a hidden div, of course I get it, but this is what I'm trying to avoid: right now I have that div in my footer, hidden. I'd rather just make a call when it's needed, instead of loading it every single time a page is accessed. You can go here to see the whole script I posted in my last question: How to use ajax to show php in a modal pop up.

    Does anyone have any idea? I read that .load() has the ability to grab specific content from a request, but I'm not sure of the major difference between it and $.get(). I've tried both and get the same results. I'm using WordPress, and WordPress's own AJAX requests run smoothly, so I know it's not a local problem, it's my coding lol. OK, I'm done typing :)
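
    Two things stand out in the snippet above. $('') selects no elements, so even when the request fires, the returned markup has nowhere to go; .load() needs the real container of the modal as its target. And the difference between the two methods: .load(url + ' #selector') fetches the page and inserts only the fragment matching the selector into the matched elements, while $.get() just hands the raw response to a callback and inserts nothing. A sketch, with #popupContent standing in for whatever element actually holds the modal's content:

        // Fetch the page and insert only its #content fragment into the modal.
        $('#popupContent').load('/something/somewhere/this #content', function () {
            // Runs after the fragment has been inserted; safe to fade in now.
            jQuery('#popupContent').fadeIn(200).css({ 'width': Number(popWidth) });
        });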

  • How to debug JBoss out of memory problem?

    - by user561733
    Hello. I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it uses memory as intended by the startup configuration. However, it seems that when some unknown user action is taken in the sole web application JBoss is serving (or the log file grows to a certain size), memory increases dramatically and JBoss freezes. While frozen, it is difficult to kill the process or do anything because of the low memory. When the process is finally killed via kill -9 and the server is restarted, the log file is very small and contains only output from the startup of the new process, with no information on why the old process's memory grew so much. This is why it is so hard to debug: server.log has no information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 KB, though it grows properly during normal memory circumstances.

    JBoss configuration:

        JBoss (MX MicroKernel) 4.0.3
        JDK 1.6.0 update 22
        PermSize=512m
        MaxPermSize=512m
        Xms=1024m
        Xmx=6144m

    Basic system info:

        Operating system: CentOS Linux 5.5
        Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
        Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    Example system state during normal pre-freeze conditions, a few minutes after the JBoss service starts:

        Running processes: 183
        CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
        CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
        Real memory: 17.38 GB total, 2.46 GB used
        Virtual memory: 19.59 GB total, 0 bytes used
        Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:

        Running processes: 225
        CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
        CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
        Real memory: 17.38 GB total, 17.18 GB used
        Virtual memory: 19.59 GB total, 706.29 MB used
        Local disk space: 113.37 GB total, 11.89 GB used
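
    Since the dead process leaves nothing behind, one standard step is to make the JVM write its own evidence at the moment of failure. These HotSpot options have existed since well before JDK 6u22, so they should apply here (the paths are examples; they go into JAVA_OPTS in the JBoss start script):

        -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/var/log/jboss/heapdump.hprof
        -verbose:gc -Xloggc:/var/log/jboss/gc.log

    The heap dump can then be opened in a tool such as Eclipse MAT to see what filled the heap, and the GC log shows whether memory grew gradually or in one burst. Note, though, that a freeze from heavy swapping (73% IO above) can occur before the JVM ever throws OutOfMemoryError.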

  • Exchange 2003 inbound routing issue

    - by user565712
    Just recently we started experiencing inbound routing issues. Email addressed to [email protected] is intermittently rewritten to [email protected]. This is happening for several users and, as stated, is intermittent. I don't know where to start looking for the solution. Is this an Exchange issue? A DNS issue? We have a single Exchange server inside our network with an FQDN of server.domain.local and a single SMTP virtual server. In the Advanced properties of the Delivery tab of the virtual server, the Masquerade Domain textbox is empty and the FQDN textbox is set to the domain itself, domain.com. The DNS record for domain.com is a CNAME entry referencing www.domain.com. Is this somehow related to the problem? I checked the headers of the inbound messages that generated NDRs as a result of being sent to [email protected], and nowhere in the headers is www.domain.com mentioned. To make my life even more difficult, we use Postini as a third-party spam filtering service: our MX records point to the Postini servers, and Postini delivers the messages to our server. Perhaps it is Postini that is mucking things up? (sigh) I'm having trouble with this one, and the intermittent aspect is making it that much more difficult. Any ideas?

  • Updating iOS application content which include images

    - by azamsharp
    I am working on a vegetable gardening application. Apart from each vegetable's name and description, I also have an image of it. Currently, I have all the images in the Supporting Files folder of the Xcode project. But later on I want to update the application's content dynamically, without the user having to download a new version. When the user updates the application or downloads new data from the server, that data will include images. Can I store those images in the Supporting Files folder, or somewhere else where they can be referenced by just a name?

    RELATED QUESTION: I will also allow the user to take pictures of their vegetables and then write notes about them, like "just planted" or "about to harvest". What is the recommended approach for storing these pictures/photos? I could store them in the user's photo library, keep a reference in the local database, and then fetch and display each picture using that reference. The problem with that approach is that if the user accidentally deletes a picture from the library, it will no longer be displayed in my application. The only other way I can see is to store the picture in the app's local database as a BLOB.
