Search Results


  • Is there something wrong with this code? AMFReader vs AMFWriter

    - by Triynko
    Something doesn't seem right about the source code for Flash Remoting's Date (AS3) <-> DateTime (.NET) stream reader/writer methods when it comes to handling UTC vs. local times. It seems to write the DateTime data fine, including a 64-bit representation as milliseconds elapsed since Jan 1, 1970, as well as a UTC offset:

        public void WriteDateTime(DateTime d)
        {
            this.BaseStream.WriteByte(11);
            DateTime time = new DateTime(0x7b2, 1, 1); // 0x7b2 == 1970
            long totalMilliseconds = (long)d.Subtract(time).TotalMilliseconds;
            long l = BitConverter.DoubleToInt64Bits((double)totalMilliseconds);
            this.WriteLong(l);
            int hours = TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Today).Hours;
            this.WriteShort(hours);
        }

    But when the data is read, it seems to be ignoring the short UTC offset value that was written, and appears to just discard it!

        private DateTime ReadDateValue()
        {
            long num2 = (long)this.ReadDouble();
            DateTime time2 = new DateTime(0x7b2, 1, 1).AddMilliseconds((double)num2);
            int num3 = this.ReadInt16() / 60; // num3 is not used for anything!
            return time2;
        }

    Can anyone make sense of this? I also found some similar source code for AMFReader here, which has a ReadDateTime method that seems to do something very similar, but goes on to use the UTC offset for something.
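
    For what it's worth, a reader that honored the offset might look like the sketch below. This is illustrative only, not the library's actual behavior: the AMF0 spec reserves that trailing 16-bit field (writers are supposed to emit 0x0000), which may be why the stock reader discards it. The sketch assumes the field holds an offset in minutes, consistent with the reader's ReadInt16() / 60:

        // Hypothetical member of the same reader class; applies the offset
        // instead of discarding it.
        private DateTime ReadDateValueWithOffset()
        {
            long ms = (long)this.ReadDouble();
            DateTime epochTime = new DateTime(1970, 1, 1).AddMilliseconds(ms);
            int offsetMinutes = this.ReadInt16();          // the field num3 throws away
            return epochTime.AddMinutes(offsetMinutes);    // shift into the writer's local time
        }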

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because the compiler knows the exception will prevent execution from reaching a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
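
    As a postscript: later versions of the platform did add almost exactly this, the [DoesNotReturn] attribute in System.Diagnostics.CodeAnalysis (.NET Core 3.0+). It tells the compiler's nullable flow analysis that a method never returns normally, though as far as I know it does not relax the definite-assignment error shown above, so the sketch below (names are mine) illustrates the idea rather than fixing this exact snippet:

        using System;
        using System.Diagnostics.CodeAnalysis;

        static class Guard
        {
            [DoesNotReturn] // flow analysis treats code after a call to this as unreachable
            public static void FailNow(string message) =>
                throw new InvalidOperationException(message);
        }

        // After Guard.FailNow(...), nullable analysis knows execution cannot continue,
        // so e.g. dereferencing a maybe-null variable no longer warns.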

  • php sessions not working

    - by Elwhis
    Hey guys, I have a problem; I tried to google some solutions but without success. I am working with WAMP 2.0 (PHP 5.3, Apache 2.2.11), but my sessions are not storing data. I have a page that accepts a parameter, which (simplified version) I want to store in a session, so when I come to www.example.com/home.php?sessid=db_session_id the script looks like:

        session_start();
        $sessid = @$_GET['sessid'];
        if ($sessid) {
            $_SESSION['sessid'] = $sessid;
        }
        var_dump($_SESSION);

    and outputs:

        array(1) { [0]=> string(13) "db_session_id" }

    which is fine, but then, when I go to www.example.com/home.php (without the sessid parameter), the $_SESSION array is empty. I've even tried to comment out the $_SESSION['sessid'] = $sessid; line before going to the page without the parameter, but still it didn't work. I've checked the session_id() output and the session id remains the same.

    Session settings from phpinfo():

        Session Support: enabled
        Registered save handlers: files user
        Registered serializer handlers: php php_binary wddx

        Directive                          Local Value   Master Value
        session.auto_start                 Off           Off
        session.bug_compat_42              On            On
        session.bug_compat_warn            On            On
        session.cache_expire               180           180
        session.cache_limiter              nocache       nocache
        session.cookie_domain              no value      no value
        session.cookie_httponly            Off           Off
        session.cookie_lifetime            0             0
        session.cookie_path                /             /
        session.cookie_secure              Off           Off
        session.entropy_file               no value      no value
        session.entropy_length            0             0
        session.gc_divisor                 1000          1000
        session.gc_maxlifetime             1440          1440
        session.gc_probability             1             1
        session.hash_bits_per_character    5             5
        session.hash_function              0             0
        session.name                       PHPSESSID     PHPSESSID
        session.referer_check              no value      no value
        session.save_handler               files         files
        session.save_path                  c:/wamp/tmp   c:/wamp/tmp
        session.serialize_handler          php           php
        session.use_cookies                On            On
        session.use_only_cookies           On            On
        session.use_trans_sid              0             0

    EDIT: $_SESSION and $_COOKIE var dumps right after session_start():

        Session: array(1) { ["sessid"]=> string(0) "" }
        Cookie: array(6) {
            ["ZONEuser"]=> string(10) "3974260089"
            ["PHPSESSID"]=> string(26) "qhii6udt0cghm4mqilctfk3t44"
            ["__utmz"]=> string(91) "1.1294313834.54.3.utmcsr=u.cz|utmccn=(referral)|utmcmd=referral|utmcct=/registered/packages"
            ["__utma"]=> string(48) "1.1931776919.1287349233.1294266869.1294313834.54"
            ["__utmc"]=> string(1) "1"
            ["__utmb"]=> string(18) "1.49.10.1294313834"
        }
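
    One way to narrow a problem like this down is to log the session id, the cookie parameters, and the session contents on every request and compare the two page loads. A small diagnostic sketch (the log destination is whatever error_log is configured to; nothing here is specific to this setup):

        <?php
        // Diagnostic sketch: run at the very top of the page, before any output.
        session_start();
        error_log('sid=' . session_id());
        error_log('cookie params=' . print_r(session_get_cookie_params(), true));
        error_log('session=' . print_r($_SESSION, true));

    If the id is stable but the array comes back empty, the next thing to check is whether the session file under session.save_path (c:/wamp/tmp here) is actually being written between requests.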

  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

        1. Create a new ASP.NET MVC 2 Web Application
        2. Right click the project and select Properties
        3. On the Web tab, select "Use Local IIS Web Server"
        4. Click on Create Virtual Directory
        5. Save all
        6. Unload the project
        7. Edit the project file
        8. Change MvcBuildViews to true
        9. Save all
        10. Reload the project
        11. Right click the project and select Publish
        12. Choose the file system publish method
        13. Enter a target location
        14. Choose Delete all existing files
        15. Select Publish
        16. Right click the project
        17. Select Publish

    Each time I do the above I get the following error:

        "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..."

    The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.
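
    A commonly suggested workaround, offered here as a sketch to adapt to your project file, is to delete the web.config files that Publish leaves staged under obj\ before the views are compiled, since aspnet_compiler otherwise treats them as nested applications:

        <!-- In the .csproj, replacing the stock MvcBuildViews target -->
        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <ItemGroup>
            <!-- web.configs that a previous Publish staged under obj\ -->
            <ExtraWebConfigs Include="$(BaseIntermediateOutputPath)**\web.config" />
          </ItemGroup>
          <Delete Files="@(ExtraWebConfigs)" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)" />
        </Target>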

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints out open data from the local transit authority, which is returned in the form of an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my queries develops after they are sent to the transit authority's web server. When the authority developed their service, they came up with their own unique error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
            <Code>3005</Code>
            <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom code as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error. This problem cascades into my program, which then prints out a java.io.FileNotFoundException error pointing to the offending input stream. My question is therefore two-fold:

        1. Is there a way to retrieve, parse, and print a custom XML-formatted error code sent by a web service using the plugins that are available in Java?
        2. If the above is not possible, what other tools should I use or develop to handle such custom codes as described?
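
    When the server responds with a 4xx/5xx status, HttpURLConnection throws from getInputStream(), but the response body is still available via getErrorStream(). A sketch of reading either stream and handing it to a standard XML parser (the URL is a placeholder):

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class TransitQuery {
            public static Document fetchXml(String serviceUrl) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(serviceUrl).openConnection();
                // 404 etc.: the custom <Error> document is on the error stream
                InputStream body = conn.getResponseCode() >= 400
                        ? conn.getErrorStream()
                        : conn.getInputStream();
                return DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(body);
            }
        }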

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to pass files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to a db, and then I use sequential mmap() calls to pass file blocks with send(). However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (real size < size in db). In this situation I've sent all the real data blocks, and when the next data block is mapped, mmap returns no errors, just a usual address (I've checked the errno variable also; it's equal to zero after mmap). But when the daemon tries to send this block it gets a Segmentation Fault. (This behaviour is reliably reproduced on FreeBSD 8.0 amd64.) I was using a safety check before open to ensure the size with a stat() call, but real life shows me that a segfault can still be raised in rare situations. So, my question is: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb says that the given address is out of bounds. Perhaps there is another solution somebody can propose.
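
    One approach is to treat the db size as advisory and re-check the file's real size against the same descriptor immediately before each map, clamping the mapped length to what the file actually contains. A sketch (error handling trimmed, block size arbitrary; offsets are assumed page-aligned as mmap requires):

        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <stddef.h>

        /* Map the block at `offset`, but never past the file's current end.
           Returns NULL when the file is shorter than the db claimed. */
        static void *map_block(int fd, off_t offset, size_t blksz, size_t *maplen)
        {
            struct stat st;
            if (fstat(fd, &st) == -1 || offset >= st.st_size)
                return NULL;                     /* db size was stale */
            size_t avail = (size_t)(st.st_size - offset);
            *maplen = avail < blksz ? avail : blksz;
            void *p = mmap(NULL, *maplen, PROT_READ, MAP_SHARED, fd, offset);
            return p == MAP_FAILED ? NULL : p;
        }

    Note that touching pages past EOF in a mapping can also raise SIGBUS rather than SIGSEGV, so if the file can shrink while mapped, handling both signals is worth considering.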

  • wpf error template - red box still visible on collapse of an expander

    - by Andy Clarke
    Hi, I'm doing some validation on the DataSource of a TextBox that's within an Expander, and have found that once a validation error has been triggered, if I collapse the Expander, the red box stays where the TextBox would have been.

        <Expander Header="Blah Blah Blah">
            <TextBox Name="TextBox"
                     Validation.ErrorTemplate="{DynamicResource TextBoxErrorTemplate}"
                     Text="{Binding Path=Blah, UpdateSourceTrigger=PropertyChanged, ValidatesOnDataErrors=True}" />
        </Expander>

    I've tried to get round this by binding the visibility of the error template to the Expander, however I think there's something wrong with the binding.

        <local:NotVisibleConverter x:Key="NotVisibleConverter" />
        <ControlTemplate x:Key="TextBoxErrorTemplate">
            <DockPanel>
                <Border BorderBrush="Red" BorderThickness="2"
                        Visibility="{Binding Path=IsExpanded, Converter={StaticResource NotVisibleConverter}, RelativeSource={RelativeSource AncestorType=Expander}}">
                    <AdornedElementPlaceholder Name="MyAdorner" />
                </Border>
            </DockPanel>
            <ControlTemplate.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
                </Trigger>
            </ControlTemplate.Triggers>
        </ControlTemplate>

    I guess I've gone wrong with my binding; can someone put me back on track please? Alternatively, does anyone know another solution to the ErrorTemplate still being visible on the collapse of an Expander?
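
    The likely snag is that the error template is rendered in the adorner layer, a separate visual tree, so a RelativeSource walk to the Expander finds nothing. One approach that avoids the ancestor lookup entirely is to bind against the adorned element itself through the placeholder: when the Expander collapses, the TextBox's IsVisible becomes false. A sketch using the built-in BooleanToVisibilityConverter:

        <BooleanToVisibilityConverter x:Key="BoolToVis" />
        <ControlTemplate x:Key="TextBoxErrorTemplate">
            <DockPanel>
                <!-- Hide the red border whenever the adorned TextBox itself is not visible -->
                <Border BorderBrush="Red" BorderThickness="2"
                        Visibility="{Binding ElementName=MyAdorner,
                                             Path=AdornedElement.IsVisible,
                                             Converter={StaticResource BoolToVis}}">
                    <AdornedElementPlaceholder Name="MyAdorner" />
                </Border>
            </DockPanel>
        </ControlTemplate>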

  • Web application date time localization best practice at 201x

    - by Hieu Lam
    Hi all, I have worked on various web projects where correct date/time localization was never done or considered thoroughly, so I want to ask about this very typical problem here, and I want to hear comments from experts on it.

    What is the correct strategy for storing a date/time value from client to server? As I understand it, because of locale and timezone we have to do a conversion. I have heard about GMT and UTC time, and after doing some searching it seems that UTC is more accurate, so we convert from client time to UTC+0 when saving, and when we read the value from the server to the client, we convert from server time back to client time again? However, I see at the bottom of some websites the sentence "All times are in UTC", "All times are in GMT", and also "All times are in your local time". So maybe not all the sites do the conversion back and forth, and in that case the user has to do the date/time conversion manually?

    How do we display the date/time conveniently for a user based on his locale and region, and how do we provide personalization of date/time values? At one time I depended on VBScript to do the display, and the format was read automatically from the Windows regional and format settings. But without VBScript, how can we determine a date/time pattern for a user of a specific locale? Do we have to store a mapping between a locale and a pattern somewhere and do the conversion on the server side?

    Although date/time conversion is needed in most cases, there are situations where only the date matters; for example, if my birthday is 2 Feb 1980, it should be the same for all locales and no conversion should be done. How can we address this issue?
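
    For the display half of the question, today's browsers can do the locale mapping themselves, so no server-side table of locale patterns is needed. A sketch of the store-UTC / render-local pattern in client-side script (the stored value is a stand-in):

        // Persist instants as UTC; here, a value as it might come back from the server.
        const savedUtc = new Date(Date.UTC(2010, 0, 15, 14, 30));

        // The browser knows the viewer's zone and locale conventions.
        const display = new Intl.DateTimeFormat(navigator.language, {
            dateStyle: "long",
            timeStyle: "short",
        }).format(savedUtc);

        // Date-only values (birthdays) should skip zone conversion entirely:
        // store them as a plain "1980-02-02" string, not as an instant.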

  • curl: downloading from dynamic url

    - by adam n
    I'm trying to download an HTML file with curl in bash, like this page: http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++ When I download it manually, it works fine. However, when I try and run my script through crontab, the output HTML file is very small and just says "Object moved to here." with a broken link. Does this have something to do with the sparse environment the crontab commands run in? I found this question: http://stackoverflow.com/questions/1279340/php-ssl-curl-object-moved-error but I'm using bash, not PHP. What are the equivalent command line options or variables to set to fix this problem in bash? (I want to do this with curl, not wget.)

    Edit: well, sometimes downloading the file manually (via an interactive shell) works, but sometimes it doesn't (I still get the "Object moved to here" message). So it may not specifically be a problem with cron's environment, but with curl itself.

    The cron entry:

        * * * * * ~/.class/test.sh >> ~/.class/test_out 2>&1

    test.sh:

        #! /bin/bash
        PATH=/usr/local/bin:/usr/bin:/bin:/sbin
        cd ~/.class
        course="physics 1b"
        url="http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++"
        curl "$url" -sLo "$course".html --max-redirs 5

    As I was searching around on Google, someone suggested that the problem might happen because there are parameters in the URL (because it is a dynamic URL?).
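
    ASP.NET pages that answer with an "Object moved" page are often issuing a redirect that depends on a session cookie; if the cookie isn't replayed on the follow-up request, you land on the error page. A hedged variant of the curl call that keeps cookies across the redirect chain and presents a browser-like User-Agent (both are guesses at what the server checks, not certainties):

        #!/bin/bash
        # Keep cookies across redirects and look like a browser.
        curl "$url" -sLo "$course.html" --max-redirs 5 \
             -A "Mozilla/5.0" \
             -c /tmp/cookies.txt -b /tmp/cookies.txt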

  • Help using RDA on a Desktop Application?

    - by Joel
    I have a .NET 3.5 Compact Framework project that uses RDA for moving data between its mobile device's local SqlCe database and a remote MSSQL 2008 server (it uses RDA Push and Pull). The server machine has a virtual directory with sqlcesa35.dll (v3.5.5386.0) set up for RDA. We usually install these cabs on the mobile devices, and the RDA process does not have any problems:

        sqlce.wce5.armv4i.cab
        sqlce.repl.wce5.armv4i.cab

    Now I am trying to run this application as a desktop application. RDA Pull (download) has been working well, but the RDA Push (upload) is giving me some problems. This is the exception that I get in the desktop application when I try to use RDA Push:

        System.Data.SqlServerCe.SqlCeException
        The Client Agent and Server Agent component versions are incompatible. The compatible
        versions are: Client Agent versions 3.0 and 3.5 with Server Agent version 3.5, and
        Client Agent version 3.5 with Server Agent version 3.5. Re-install the replication
        components with the matching versions for client and server agents.
        [ 35,30,Client Agent version = ,Server Agent version = ]

    I have tried copying the file C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Desktop\SqlServerCe.dll (v3.5.5692.0) to bin\debug. I have also tried copying another version of SqlServerCe.dll (v3.0.5206.0) to bin\debug, but this just gives me a slightly different exception:

        System.Data.SqlServerCe.SqlCeException [ 35,30 ]

    Is there a different setup or any different dlls that I need to use? Thanks, -Joel

  • My perl script is acting funny...

    - by TheGNUGuy
    I am trying to make a Jabber bot from scratch, and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl, and I'm using Eclipse with the Perl plugin to run and debug the script. The funny behavior occurs when I run or debug the script: if I run the script under the debugger, it works fine, meaning I can send messages to the bot and it can send messages to me. However, when I just execute the script normally, the bot sends the successful connection message, then disconnects from my Jabber server and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is it has something to do with the subroutines and sending the presence of the bot. (I know for sure that it has something to do with sending the bot's presence, because if the presence code is removed, the script behaves as expected, except the bot doesn't appear to be online.) If anyone can help me with this, that would be great. I originally had everything in one file but separated it into several trying to figure out my problem. Here are the pastebin links to my source code:

        jabberBot.pl:    http://pastebin.com/cVifv0mm
        chatRoutine.pm:  http://pastebin.com/JXmMT7av
        trimSpaces.pm:   http://pastebin.com/SkeuWtu1

    Thanks again for any help!

  • Celery tasks not working with gevent

    - by Novarg
    When I use celery + gevent for tasks that use the subprocess module, I get the following stack trace:

        Traceback (most recent call last):
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
            R = retval = fun(*args, **kwargs)
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
            return self.run(*args, **kwargs)
          File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task
            res = call_external_script(popen_obj.communicate)
          File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script
            return func_to_call(*args, **kwargs)
          File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate
            return self._communicate(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate
            stdout, stderr = self._communicate_with_poll(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll
            poller = select.poll()
        AttributeError: 'module' object has no attribute 'poll'

    My manage.py looks like the following (I do the monkeypatch there):

        #!/usr/bin/env python
        from gevent import monkey
        import sys
        import os

        if __name__ == "__main__":
            if not 'celery' in sys.argv:
                monkey.patch_all()
            os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings")
            from django.core.management import execute_from_command_line
            sys.path.append(".")
            execute_from_command_line(sys.argv)

    Is there a reason why celery tasks act like they weren't patched properly? P.S.: the strange thing is that my local setup on macOS works fine, while I get such exceptions under CentOS (all package versions are the same, and the init and config scripts too).
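
    The traceback suggests the worker process is half-patched: gevent's monkey patching removes select.poll() (gevent can't green it), but the stdlib subprocess module still tries to use it. If your gevent version supports it (1.0+), asking patch_all to swap in gevent's cooperative subprocess as well is one sketch of a fix; whether your celery launch path actually executes this line is an assumption worth verifying:

        from gevent import monkey
        # Must run before anything imports select/subprocess. The subprocess=True
        # flag makes Popen.communicate() use gevent's implementation instead of
        # the stdlib one that reaches for the removed select.poll().
        monkey.patch_all(subprocess=True)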

  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat using ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means:

        1. Setting the best quality given the available bandwidth when the chat starts.
        2. Responding to network congestion during the chat by decreasing the quality.

    The task is similar to dynamic stream switching, but P2P has its specifics that make dynamic streaming approaches not work. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, it looks like the most reliable QoS metric for P2P is SRTT. In my simulated tests on a local network, it shoots up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the value for bandwidth in Camera.setQuality(0, bandwidth) to respond to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.
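
    For what it's worth, one shape a feedback loop could take is polling NetStream.info.srtt on a timer and stepping the camera bandwidth down multiplicatively on congestion, up additively when things are calm. The thresholds and step sizes below are guesses to tune, not recommendations, and outgoingStream/camera stand for your publishing NetStream and Camera:

        import flash.events.TimerEvent;
        import flash.utils.Timer;

        // Sketch: AIMD-style adaptation driven by SRTT (thresholds are assumptions).
        var currentBandwidth:Number = 300 * 1024;
        var timer:Timer = new Timer(2000);
        timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            var srtt:Number = outgoingStream.info.srtt;
            if (srtt > 400) {
                currentBandwidth = Math.max(32 * 1024, currentBandwidth * 0.75);
            } else if (srtt < 150) {
                currentBandwidth = Math.min(500 * 1024, currentBandwidth + 16 * 1024);
            }
            camera.setQuality(int(currentBandwidth), 0); // quality 0 lets picture quality float
        });
        timer.start();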

  • Asp.Net Login Control very slow initial connection to Non-Trusted AD Domain

    - by Eric Brown - Cal
    The ASP.NET Login control is very slow making the initial connection to AD when authenticating to a different domain than the domain the web server is a member of. The problem occurs on the IIS server and when using Visual Studio's built-in web server. It takes about 30 seconds the first time when attempting to use the control to connect against another domain. There is no trust relationship between the web server's domain and the other domains (we attempted connecting to several different domains). Subsequent connections execute quickly until the connection times out. Using Sysinternals Process Monitor to troubleshoot, there are two OpenQuery operations right before the delay to C:\WINDOWS\assembly\GAC_MSIL\System.DirectoryServices\2.0.0.0_b03f5f7f11d50a3a\Netapi32.dll with a result of NAME NOT FOUND, and right after the 30-second delay the TCP Send and TCP Receive operations indicate communication begins with the AD server.

    Things we have tried: impersonating an administrator on the web server in the web.config; granting permissions on the crypto keys to NetworkService and ASPNET; specifying by IP instead of DNS name; multiple variations of specifying the name and LDAP server with domains and OUs; local host entries; looking for blocked ports (SYN_SENT) with netstat -an. Nslookup resolves all the domains and systems involved correctly. TraceRt shows the correct routes. Any ideas or hints are greatly appreciated.
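
    One thing worth trying, sketched below with placeholder names, is pinning the membership provider to one specific domain controller so the provider doesn't spend the first request discovering a DC in the untrusted domain:

        <!-- web.config sketch; server, port, DN and credentials are placeholders -->
        <connectionStrings>
          <add name="OtherDomainAD"
               connectionString="LDAP://dc1.otherdomain.example:389/DC=otherdomain,DC=example" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="ADProvider">
            <providers>
              <add name="ADProvider"
                   type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="OtherDomainAD"
                   connectionUsername="otherdomain\svc_web"
                   connectionPassword="placeholder" />
            </providers>
          </membership>
        </system.web>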

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump, I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet. My workflow is to work from the desktop: make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had set up the repo on the network share using git clone repo_on_my_desktop, and then updated it with git pull origin master. The problem that I am running into is when I use git rebase to squash multiple commits prior to dcommitting to the main svn repository: when I do this, I get merge conflicts in the repo on the network share when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?
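
    Since the rebase rewrites history, a pull on the share will always see divergence. One way around it is to make the share a bare mirror and push to it from the desktop; a mirror push happily overwrites rewritten branches. A sketch (paths assumed):

        # One-time setup: a bare repo on the share, registered as a remote.
        git init --bare /mnt/share/backup.git
        git remote add backup /mnt/share/backup.git

        # Nightly backup: mirror all refs, tolerating history rewritten by rebase.
        git push --mirror backup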

  • Passing an ActionScript JPG Byte Array to Javscript (and eventually to PHP)

    - by Gus
    Our web application has a feature which uses Flash (AS3) to take photos using the user's web cam, then passes the resulting byte array to PHP, where it is reconstructed and saved on the server. However, we need to be able to take this web application offline, and we have chosen Gears to do so. The user takes the app offline, performs his tasks, then when he's reconnected to the server, we "sync" the data back with our central database. We don't have PHP to interact with Flash anymore, but we still need to allow users to take and save photos. We don't know how to save a JPG that Flash creates in a local database. Our hope was that we could save the byte array, a serialized string, or somehow actually persist the object itself, then pass it back to either PHP or Flash (and then PHP) to recreate the JPG. We have tried:

        - passing the byte array to Javascript instead of PHP, but Javascript doesn't seem to be able to do anything with it (the object seems to be stripped of its methods)
        - stringifying the byte array in Flash and then passing it to Javascript, but we always get the same string: ÿØÿà

    Now we are thinking of serializing the string in Flash, passing it to Javascript, then on the return route passing that string back to Flash, which will then pass it to PHP to be reconstructed as a JPG (whew). Since no one on our team has extensive Flash background, we're a bit lost. Is serialization the way to go? Is there a more realistic way to do this? Does anyone have any experience with this sort of thing? Perhaps we can build a Javascript class that is the same as the byte array class in AS?
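
    A note on that ÿØÿà string: those are the first bytes of every JPEG (FF D8 FF E0) read as text, so the conversion is stopping at the first null byte rather than failing outright. The usual workaround is to Base64-encode the ByteArray before it crosses the Flash/Javascript boundary, so only printable characters travel. A sketch assuming the Flex SDK's encoder is available (pure-AS3 projects would need a third-party Base64 class), with "saveSnapshot" as a placeholder for your JS callback:

        import flash.external.ExternalInterface;
        import flash.utils.ByteArray;
        import mx.utils.Base64Encoder;

        function sendJpgToJavascript(jpgBytes:ByteArray):void {
            var enc:Base64Encoder = new Base64Encoder();
            enc.encodeBytes(jpgBytes);                  // binary -> printable text
            // On the return trip, decode the Base64 in JS, Flash, or PHP
            // (e.g. PHP's base64_decode) to get the JPG bytes back.
            ExternalInterface.call("saveSnapshot", enc.toString());
        }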

  • Glassfish, railo and coldbox - messed up links?

    - by mrt181
    I am new to ColdFusion and ColdBox (and programming). I tried to set up ColdBox, but some of the links in the sample applications are broken. My configuration is a GlassFish v3 installation with the current Railo OSS. I access my site through Apache 2.2.14, so instead of http://127.0.0.1:8080/railo/ I access my environment through http://railo/. In Railo I have a webroot mapping / to C:/webapps/myproject/. I have copied the current ColdBox 3M4 to C:/webapps/myproject/coldbox. I can access the dashboard through http://railo/coldbox/dashboard/index.cfm and have access to all options. My problems start the moment I try to open the sample gallery (the exception below reports "Zugriff verweigert", i.e. access denied):

        HTTP Status 500 -
        type: Exception report
        description: The server encountered an internal error () that prevented it from fulfilling this request.
        exception: java.io.FileNotFoundException: C:\webapps\viss-dev\coldbox\samples (Zugriff verweigert)
        note: The full stack traces of the exception and its root causes are available in the GlassFish v3 logs.

    OK, no problem, just enter the link directly: http://railo/coldbox/samples/index.cfm. The site looks plain, who cares - BUT all local links look like this: http://127.0.0.1:8080/coldbox/samples/applications/helloworld/index.cfm (railo is replaced with 127.0.0.1:8080). Looks like trouble. To make my confusion perfect: when I try to access the login app http://railo/coldbox/samples/applications/sampleloginapp/index.cfm and hit the submit button, I am redirected to this address: http://railo/railo/coldbox/samples/applications/sampleloginapp/index.cfm. I believe that this is not really ColdBox-related, but it manifests itself when I try to use ColdBox, so here I am. P.S.: amazon.de takes too long to ship the ColdBox book :(
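
    The rewritten hosts in those links suggest Railo is building URLs from the Host header it receives from the proxy (127.0.0.1:8080) rather than the one the browser sent (railo). If the Apache front end uses mod_proxy, one sketch of a fix is to forward the original Host header (the directives are real; whether this matches your vhost setup is an assumption):

        # Apache vhost sketch: keep the browser's Host header on proxied requests
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/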

  • svnserve not strictly required?

    - by Kev
    I was reading the Red Bean book and noticed this paragraph:

        Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called "Supporting Multiple Repository Access Methods"). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs - from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

    I realized that, since I'm the only one accessing the repository, ever, none of these caveats seem to apply. Can I safely shut down svnserve and then only ever have to worry about upgrading my TortoiseSVN client, not both the client and the server, whenever there's a new version out? (I've tried it already - I just needed to use the Relocate feature to switch from svn:// to file:// - but I wanted to make sure something wouldn't be sneaking up on me if I left it this way.)

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is if there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    where these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
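
    On the persistence question: T-SQL local variables like @GetStartDate live only for the batch they are declared in, so they will not survive between separate Execute calls. SQL Server does have a per-connection slot that persists: CONTEXT_INFO, a 128-byte varbinary scoped to the session. A sketch of pressing it into service for the two dates (the packing format here is made up; SQL Server 2016+ also offers the friendlier SESSION_CONTEXT):

        -- On the connection, once:
        DECLARE @ctx varbinary(128) = CAST('20090101|20100101' AS varbinary(128));
        SET CONTEXT_INFO @ctx;

        -- In any later batch on the same connection:
        SELECT LEFT(CAST(CONTEXT_INFO() AS varchar(128)), 8)          AS StartDate,
               SUBSTRING(CAST(CONTEXT_INFO() AS varchar(128)), 10, 8) AS EndDate;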

  • are there java based auto-updating tools other than WebStart & Eclipse P2?

    - by DaddyB
    Hi, I am working on a Java-based application and we are looking to ease our deployment of updates. Up until now, we've always simply sent out new install packs and had the sysadmins on our customer sites roll out the upgrades - painful for a large number of users. What I'd like to do is something similar to Java Web Start (or Eclipse P2): when the application starts, it checks for updates in a specified location and then downloads the updates prior to starting. But here's my problem - I want more control over what's done outside of the scope of plugins and jar files. For example:

        - I'd like to be able to update my JVM (we ship a modified version with additional security features).
        - I need to install DLLs, possibly local to the jar files, sometimes to Windows.
        - Occasionally I need to run MSIs to install Windows components (e.g. printer drivers).
        - I need to modify config files and the registry.

    I have found a few applications that support this (such as AppLifeUpdate at http://www.kineticjump.com/), but they tend to be .NET focused, and it seems a bit perverse to introduce a .NET dependency into a Java application ;) I know I could write my own here, but if there is already a third-party library out there that supports this kind of facility, it would make my life a lot easier. So, has anyone else had a similar problem and knows of some products I could look at? Thanks, Brian.

  • Unwanted Shell expansion when assigning the output of a shell command to a variable

    - by Rob Goodwin
    I am exporting a portion of a local prototype svn repository to import into a different repo. We have a number of svn properties set throughout the repo, so I figured I would write a script to list the file elements and their corresponding properties. How hard can that be, right? So I started writing a bash script that would assign the output of svn proplist -v to a variable so I could check whether the specified file had any properties:

        #!/bin/bash
        svn proplist -v $1
        o=$(svn proplist -v "$1")
        echo $o

    Now this works fine and echoes the output of the svn proplist command. But if the proplist command returns something like:

        svn:ignore : *
        build

    it performs a shell expansion on the * and inserts the entire directory listing prior to the build property value. So if the directory had a.txt, b.txt, and build files/dirs in it, the output would look like:

        svn:ignore a.txt b.txt build

    I figure I need to somehow escape the output or something to keep the expansion from happening, but have yet to find something that works. There are other ways to do this, but I hate when I cannot figure something out, and I have to admit, I think this one beat me (well, given the time I can spend on it).
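
    The expansion doesn't happen in the assignment (command substitution is safe) but in the unquoted echo $o, where the shell word-splits the value and glob-expands the *. Quoting the variable should be all that's needed:

        #!/bin/bash
        o=$(svn proplist -v "$1")
        echo "$o"              # quoted: the * survives literally
        printf '%s\n' "$o"     # safer still if the value could start with a dash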

  • Can't connect to MySql database from server using Devart

    - by annelie
    Hello, I'm having trouble connecting to a MySQL database from a server. I'm using Devart to connect, and I'm not sure if I need to do something more than simply reference the dlls. When I started working on this project it was referencing the CoreLabs dlls which, if I've understood correctly, is the previous name for Devart. That didn't work, so I downloaded the new Devart dlls instead. It works on my local machine, but when uploading to the server it crashes. I made a tiny console app to test the connection, and it fails when initialising, before I've even assigned a host etc. Do I need to do anything more than just upload my .exe file to the server?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Devart.Data;
        using Devart.Data.MySql;

        namespace TestDatabaseConnection
        {
            public class Program
            {
                public static void Main(string[] args)
                {
                    Console.WriteLine("about to connect");
                    ConnectToDatabase();
                    Console.ReadLine();
                }

                public static void ConnectToDatabase()
                {
                    MySqlConnection connection = new MySqlConnection();
                }
            }
        }

    UPDATE: I can't see what the error is. I had a try/catch around MySqlConnection connection = new MySqlConnection(); but no exception is thrown, it just crashes. Thanks, Annelie
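
    A possible explanation for the "no exception" crash: the CLR loads Devart.Data.MySql.dll when it JIT-compiles the first method that references it, so if the assembly is missing on the server, the FileNotFoundException surfaces at the call to ConnectToDatabase() in Main, outside a try/catch placed inside ConnectToDatabase. A sketch that should surface the real error (assuming a missing or mismatched assembly is indeed the cause):

        public static void Main(string[] args)
        {
            try
            {
                ConnectToDatabase();   // assembly load happens when this call is JITted
            }
            catch (Exception ex)       // e.g. FileNotFoundException for Devart.Data.MySql
            {
                Console.WriteLine(ex);
            }
            Console.ReadLine();
        }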

  • How to suppress/control logging of Wagon-FTP Maven extension?

    - by Vincenzo
    I'm deploying a Maven site by FTP, using Wagon-FTP. It works fine, but the output is full of FTP connection/authentication details, which effectively exposes logins and passwords to everybody (especially if the project is open source and its CI logs are publicly accessible):

        [...]
        [INFO]
        [INFO] --- maven-site-plugin:3.0-beta-3:deploy (default-deploy) @ rempl ---
        Reply received: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 1 of 50 allowed.
        220-Local time is now 09:08. Server port: 21.
        220 You will be disconnected after 15 minutes of inactivity.
        Command sent: USER ****
        Reply received: 331 User **** OK. Password required
        Command sent: PASS ********
        Reply received: 230-User **** has group access to: ***
        230 OK. Current restricted directory is /
        [...]

    Is it possible to suppress this logging? Or configure it... This is the section of my pom.xml where Wagon-FTP is used:

        [...]
        <build>
          <extensions>
            <extension>
              <groupId>org.apache.maven.wagon</groupId>
              <artifactId>wagon-ftp</artifactId>
              <version>1.0-beta-7</version>
            </extension>
          </extensions>
          [...]
        </build>
        [...]
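
    A blunt option worth testing (I'm not certain Wagon's session chatter goes through Maven's logger, so treat this as an experiment rather than a fix) is to run the deploy step in quiet mode, which limits Maven's own output to errors:

        # CI sketch: -q / --quiet suppresses Maven's INFO-level output
        mvn -q site:deploy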

  • How to apply minor updates to Drupal 6 on shared hosting

    - by marty.fried
    I've got Drupal working on a shared host, and I uploaded some modules from my home system successfully, but I've got the message that there is a security update for my version, and I should update immediately. I'm not sure how I'm supposed to do that. It seems like the update is an entire new installation. I originally installed it using the hosting company's installer, Fantastico. Should I simply over-write the existing installation with the new files? Or ignore the message? I realize I shouldn't over-write the sites folder, or anything I've modified. The instructions that come with the download seem to be for a major version upgrade, and are way too much trouble for frequent security updates. Searching Drupal's site shows many other methods, but no indication of anything official. And some were ridiculously error-prone, and not really useful. I don't have shell access to the hosting site, although I can pay extra to get it if I really need to. Or, maybe I can clone the site on my local Linux system, do the update using a script, then upload the whole thing. Does anyone have experience with this situation?
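
    For what it's worth, the usual procedure for a Drupal 6 minor (security) update is indeed "overwrite everything except sites/" and then run the database update script. A sketch of doing that on a local copy before re-uploading (paths and version are placeholders):

        # Unpack the new minor release and lay it over the docroot,
        # leaving sites/ (your modules, themes, settings, files) untouched.
        tar xzf drupal-6.x.tar.gz
        rsync -av --exclude=sites/ drupal-6.x/ /path/to/local/docroot/

        # Upload the result, put the site in maintenance mode,
        # then visit http://yoursite/update.php as the admin user.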

  • Explicit localization problem

    - by X-Dev
    When trying to translate the confirmation message to Norwegian, I get the following error:

        Cannot have more than one binding on property 'OnClientClick' on 'System.Web.UI.WebControls.LinkButton'.
        Ensure that this property is not bound through an implicit expression, for example, using meta:resourcekey.

    I use explicit localization in the following manner:

        <asp:LinkButton ID="lnkMarkInvoiced" runat="server"
            OnClick="lnkMarkInvoiced_OnClick"
            OnClientClick="<%# Resources: lnkMarkInvoicedResource.OnClientClick%>"
            Visible="False" CssClass="stdtext"
            meta:resourcekey="lnkMarkInvoicedResource"></asp:LinkButton>

    Here's the local resource file entry:

        <data name="lnkMarkInvoicedResource.OnClientClick" xml:space="preserve">
          <value>return confirm('Er du sikker?');</value>
        </data>

    If I remove the meta attribute, I get the English (default) text. How do I get the Norwegian text to appear without resorting to the code-behind? Update: removing the meta attribute prevents the exception from occurring, but the original problem still exists; I can't get the Norwegian text to show, only the default English text. Another update: I know this question is getting old, but I still can't get the Norwegian text to display. If anyone has some tips, please post a response.
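
    One thing that stands out: <%# ... %> is data-binding syntax, while explicit localization expressions use <%$ ... %>. A sketch of the markup with the expression syntax and without the competing implicit meta:resourcekey binding (assuming the resource entry above lives in the page's local .resx and its culture-specific variants):

        <asp:LinkButton ID="lnkMarkInvoiced" runat="server"
            OnClick="lnkMarkInvoiced_OnClick"
            OnClientClick='<%$ Resources: lnkMarkInvoicedResource.OnClientClick %>'
            Visible="False" CssClass="stdtext"></asp:LinkButton>

    With <%# ... %>, the expression is only evaluated when DataBind() is called, so it may never have run at all, which would also explain seeing only the default text.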
