Search Results

Search found 17958 results on 719 pages for 'local delivery'.


  • AFNetworking PostPath php Parameters are null

    - by Alejandro Escobar
    I am trying to send a username and password from an iOS app using AFNetworking framework to a php script. The iOS app continues to receive status code 401 which I defined to be "not enough parameters". I have tried returning the "username" from the php script to the iOS app and receive . Based on what I've been investigating so far, it seems as though: 1) The php script is not decoding the POST parameters properly 2) The iOS app is not sending the POST parameters properly The following is the iOS function - (IBAction)startLoginProcess:(id)sender { NSString *usernameField = usernameTextField.text; NSString *passwordField = passwordTextField.text; NSDictionary *parameters = [NSDictionary dictionaryWithObjectsAndKeys:usernameField, @"username", passwordField, @"password", nil]; NSURL *url = [NSURL URLWithString:@"http://localhost/~alejandroe1790/edella_admin/"]; AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:url]; [httpClient defaultValueForHeader:@"Accept"]; [httpClient setParameterEncoding:AFJSONParameterEncoding]; [httpClient postPath:@"login.php" parameters:parameters success:^(AFHTTPRequestOperation *operation, id response) { NSLog(@"operation hasAcceptableStatusCode: %d", [operation.response statusCode]); } failure:^(AFHTTPRequestOperation *operation, NSError *error) { NSLog(@"Error with request"); NSLog(@"%@",[error localizedDescription]); }]; } The following is the php script function checkLogin() { // Check for required parameters if (isset($_POST["username"]) && isset($_POST["password"])) { //Put parameters into local variables $username = $_POST["username"]; $password = $_POST["password"]; $stmt = $this->db->prepare("SELECT Password FROM Admin WHERE Username=?"); $stmt->bind_param('s', $username); $stmt->execute(); $stmt->bind_result($resultpassword); while ($stmt->fetch()) { break; } $stmt->close(); // Username or password invalid if ($password == $resultpassword) { sendResponse(100, 'Login successful'); return true; } else { sendResponse(400, 'Invalid Username or Password'); return false; } } sendResponse(401, 'Not enough parameters'); return false; } I feel like I may be missing something. Any assistance would be great.
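
    A likely culprit is the parameter encoding: setParameterEncoding:AFJSONParameterEncoding makes AFNetworking send the credentials as a raw JSON request body, and PHP only populates $_POST for form-encoded posts, so isset($_POST["username"]) is false and the script falls through to the 401 response. A minimal hedged sketch of the server side, assuming the field names shown above, reads the body directly:

      <?php
      // Hedged sketch: read the raw JSON body produced by AFJSONParameterEncoding,
      // since $_POST is only filled for application/x-www-form-urlencoded requests.
      $body   = file_get_contents('php://input');
      $params = json_decode($body, true);

      if (isset($params['username'], $params['password'])) {
          $username = $params['username'];
          $password = $params['password'];
          // ...continue with the prepared-statement lookup from checkLogin()...
      } else {
          sendResponse(401, 'Not enough parameters');
      }

    Alternatively, leaving login.php untouched and switching the client back to AFFormURLParameterEncoding should populate $_POST the way the script already expects.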

    Read the article

  • check if directory exists c#

    - by Ant
    I am trying to see if a directory exists based on an input field from the user. When the user types in the path, I want to check if the path actually exists. I have some c# code already. It returns 1 for any local path, but always returns 0 when I am checking a network path. static string checkValidPath(string path) { //Insert your code that runs under the security context of the authenticating user here. using (ImpersonateUser user = new ImpersonateUser(user, "", password)) { //DirectoryInfo d = new DirectoryInfo(quotelessPath); bool doesExist = Directory.Exists(path); //if (d.Exists) if(doesExist) { user.Dispose(); return "1"; } else { user.Dispose(); return "0"; } } } public class ImpersonateUser : IDisposable { [DllImport("advapi32.dll", SetLastError = true)] private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken); [DllImport("kernel32", SetLastError = true)] private static extern bool CloseHandle(IntPtr hObject); private IntPtr userHandle = IntPtr.Zero; private WindowsImpersonationContext impersonationContext; public ImpersonateUser(string user, string domain, string password) { if (!string.IsNullOrEmpty(user)) { // Call LogonUser to get a token for the user bool loggedOn = LogonUser(user, domain, password, 9 /*(int)LogonType.LOGON32_LOGON_NEW_CREDENTIALS*/, 3 /*(int)LogonProvider.LOGON32_PROVIDER_WINNT50*/, out userHandle); if (!loggedOn) throw new Win32Exception(Marshal.GetLastWin32Error()); // Begin impersonating the user impersonationContext = WindowsIdentity.Impersonate(userHandle); } } public void Dispose() { if (userHandle != IntPtr.Zero) CloseHandle(userHandle); if (impersonationContext != null) impersonationContext.Undo(); } } Any help is appreciated. Thanks! EDIT 3: updated code to use BrokenGlass's impersonation functions. However, I need to initialize "password" to something... EDIT 2: I updated the code to try and use impersonation as suggested below. It still fails everytime. I assume I am using impersonation improperly... EDIT: As requested by ChrisF, here is the function that calls the checkValidPath function. Frontend aspx file... $.get('processor.ashx', { a: '7', path: x }, function(o) { alert(o); if (o=="0") { $("#outputPathDivValid").dialog({ title: 'Output Path is not valid! Please enter a path that exists!', width: 500, modal: true, resizable: false, buttons: { 'Close': function() { $(this).dialog('close'); } } }); } }); Backend ashx file... public void ProcessRequest (HttpContext context) { context.Response.Cache.SetExpires(DateTime.Now); string sSid = context.Request["sid"]; switch (context.Request["a"]) {//a bunch of case statements here... case "7": context.Response.Write(checkValidPath(context.Request["path"].ToString())); break;
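
    One detail worth checking before anything network-related: in checkValidPath the ImpersonateUser constructor is handed the not-yet-assigned user variable, an empty domain, and an undefined password, so the impersonation almost certainly never runs with real network credentials, and Directory.Exists simply returns false on any failure, including access denied. A hedged sketch of the shape the call might take, with the credentials passed in rather than guessed at:

      // Hedged sketch: user/domain/password are placeholders for credentials that
      // actually have rights on the UNC path; ImpersonateUser is the helper class above.
      static string CheckValidPath(string path, string user, string domain, string password)
      {
          using (new ImpersonateUser(user, domain, password))
          {
              // Directory.Exists returns false on *any* error (bad path, no permission,
              // unreachable share), so "0" does not necessarily mean "does not exist".
              return Directory.Exists(path) ? "1" : "0";
          }
      }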

    Read the article

  • Is there a reason why a base class decorated with XmlInclude would still throw a type unknown exception when serialized?

    - by Tedford
    I will simplify the code to save space but what is presented does illustrate the core problem. I have a class which has a property that is a base type. There exist 3 dervived classes which could be assigned to that property. If I assign any of the derived classes to the container then the XmlSerializer throws dreaded "The type xxx was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically." exception when attempting to seralize the container. However my base class is already decorated with that attribute so I figure there must be an additional "hidden" requirement. The really odd part is that the default WCF serializer has no issues with this class hierarchy. The Container class [DataContract] [XmlRoot(ElementName = "TRANSACTION", Namespace = Constants.Namespace)] public class PaymentSummaryRequest : CommandRequest { /// <summary> /// Gets or sets the summary. /// </summary> /// <value>The summary.</value> /// <remarks></remarks> [DataMember] public PaymentSummary Summary { get; set; } /// <summary> /// Initializes a new instance of the <see cref="PaymentSummaryRequest"/> class. /// </summary> public PaymentSummaryRequest() { Mechanism = CommandMechanism.PaymentSummary; } } The base class [DataContract] [XmlInclude(typeof(xxxPaymentSummary))] [XmlInclude(typeof(yyyPaymentSummary))] [XmlInclude(typeof(zzzPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(zzzPaymentSummary))] public abstract class PaymentSummary { } One of the derived classes [DataContract] public class xxxPaymentSummary : PaymentSummary { } The serialization code var serializer = new XmlSerializer(typeof(PaymentSummaryRequest)); serializer.Serialize(Console.Out,new PaymentSummaryRequest{Summary = new xxxPaymentSummary{}}); The Exception System.InvalidOperationException: There was an error generating the XML document. --- System.InvalidOperationException: The type xxxPaymentSummary was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically. at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write13_PaymentSummary(String n, String ns, PaymentSummary o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write14_PaymentSummaryRequest(String n, String ns, PaymentSummaryRequest o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write15_TRANSACTION(Object o) --- End of inner exception stack trace --- at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id) at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces) at UserQuery.RunUserAuthoredQuery() in c:\Users\Tedford\AppData\Local\Temp\uqacncyo.0.cs:line 47
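
    Two things stand out. First, the [KnownType] list registers xxxPaymentSummary twice and never registers yyyPaymentSummary, though that only affects the DataContract side. Second, the Microsoft.Xml.Serialization.GeneratedAssembly frames in the stack trace suggest a pre-generated (sgen) serializer assembly is being picked up, and a stale one will not know about newly added derived types no matter what the attributes say. A hedged workaround while that is being tracked down is to hand the derived types to the XmlSerializer constructor explicitly:

      // Hedged sketch using the type names from the question: the extraTypes overload
      // registers the derived classes without relying on [XmlInclude] being honoured.
      var serializer = new XmlSerializer(
          typeof(PaymentSummaryRequest),
          new Type[] { typeof(xxxPaymentSummary), typeof(yyyPaymentSummary), typeof(zzzPaymentSummary) });

      serializer.Serialize(Console.Out,
          new PaymentSummaryRequest { Summary = new xxxPaymentSummary() });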

    Read the article

  • CodeIgniter Third party class not loading

    - by Jatin Soni
    I am trying to implement Dashboard widget class (found here: http://harpanet.com/programming/php/codeigniter/dashboard/index#installation) but it is giving me error Unable to load the requested class I have tried to add this class in autoload as well as menually to my controller $this->load->library('dash') but this also giving the same error. I have checked dash.php and found below method private function __example__() but can't understand what the developer is saying in comment. class Dash { private function __example__() { /* * This function is purely to show an example of a dashboard method to place * within your own controller. */ // load third_party hArpanet dashboard library $this->load->add_package_path(APPPATH.'third_party/hArpanet/hDash/'); $dash =& $this->load->library('dash'); $this->load->remove_package_path(APPPATH.'third_party/hArpanet/hDash/'); // configure dashboard widgets - format: type, src, title, cols, alt (for images) $dash->widgets = array( array('type'=>'oop', 'src'=>'test_dash', 'title'=>'Test OOP Widget', 'cols'=>3), // if 'title' is set to FALSE, the title block is omitted entirely // note: this is an 'html' widget but is being fed content from a local method array('type'=>'html', 'src'=>self::test_method(), 'title'=>false, 'cols'=>3), array('type'=>'file', 'src'=>'saf_inv.htm', 'title'=>'Safety Investigation'), // multi-content widget - set widget title in outer array (also note use of CI anchor to create a link) array('title'=>anchor('tz', 'TARGET ZERO'), // sub-content follows same array format as single content widget // 'img' content can also have an 'alt' text array('type'=>'img', 'src'=>'saf_tzout.gif', 'alt'=>'Action Completed'), array('type'=>'file', 'src'=>'saf_tz.htm'), array('type'=>'file', 'src'=>'ave_close.htm', 'title'=>'Average Time to Close') ), array('type'=>'file', 'src'=>'saf_meet.htm', 'title'=>'Safety Meeting'), array('type'=>'file', 'src'=>'saf_acc.htm', 'title'=>'Accident Investigation'), array('type'=>'file', 'src'=>'saf_hazmat.htm', 'title'=>anchor('hazmat', 'HAZMAT')), array('type'=>'file', 'src'=>'saf_cont.htm', 'title'=>'Loss of Containment'), array('type'=>'file', 'src'=>'saf_worksinfo.htm', 'title'=>'Works Information'), // an action widget - 'clear' will generate a blank widget with a style of clear:both array('type'=>'clear'), // multi-content widget - width can be set using the 'cols' param in outer array array('title'=>'RAG Report', 'cols' => 2, array('type'=>'file', 'src'=>'saf_rag.htm'), array('type'=>'img', 'src'=>'ProcSaf.gif')), array('type'=>'file', 'src'=>'saf_chrom.htm', 'title'=>'Chrome checks'), ); // populate the view variable $widgets = $dash->build('safety'); // render the dashboard $this->load->view('layout_default', $widgets); } ................... } // end of Dash class Installation path is root/application/third_party/hArpanet/hDash/libraries/dash.php How can I load this class to my system and use widgets?
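
    The private __example__() method is only a template the author expects you to copy into your own controller; the Dash class never runs it for you. The loader can only find third_party/hArpanet/hDash/libraries/dash.php once that package path has been registered, which is why both autoloading 'dash' and a bare $this->load->library('dash') fail. A hedged sketch of a controller method following the install path from the question (adding the path to $autoload['packages'] in config/autoload.php is the alternative):

      <?php
      class Dashboard extends CI_Controller {   // controller name is illustrative

          public function index()
          {
              // Register the package path first, then load the library from it.
              $this->load->add_package_path(APPPATH . 'third_party/hArpanet/hDash/');
              $this->load->library('dash');
              $this->load->remove_package_path(APPPATH . 'third_party/hArpanet/hDash/');

              // Configure widgets and render, as in the __example__() template.
              $this->dash->widgets = array(
                  array('type' => 'file', 'src' => 'saf_meet.htm', 'title' => 'Safety Meeting'),
              );
              $widgets = $this->dash->build('safety');
              $this->load->view('layout_default', $widgets);
          }
      }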

    Read the article

  • Running OpenMPI on Windows XP

    - by iamweird
    Hi there. I'm trying to build a simple cluster based on Windows XP. I compiled OpenMPI-1.4.2 successfully, and tools like mpicc and ompi_info work too, but I can't get my mpirun working properly. The only output I can see is Z:\orterun --hostfile z:\hosts.txt -np 2 hostname [host0:04728] Failed to initialize COM library. Error code = -2147417850 [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\mca\ess\hnp\ess_hnp_module.c at line 218 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_plm_init failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\runtime\orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\..\..\openmpi -1.4.2\orte\tools\orterun\orterun.c at line 543 Where z:\hosts.txt appears as follows: host0 host1 Z: is a shared network drive available to both host0 and host1. What my problem is and how do I fix it? Upd: Ok, this problem seems to be fixed. It seems to me that WideCap driver and/or software components causes this error to appear. A "clean" machine runs local task successfully. Anyway, I still cannot run a task within at least 2 machines, I'm getting following message: Z:\mpirun --hostfile z:\hosts.txt -np 2 hostname connecting to host1 username:cluster password:******** Save Credential?(Y/N) y [host0:04728] This feature hasn't been implemented yet. [host0:04728] Could not connect to namespace cimv2 on node host1. Error code =-2147024891 -------------------------------------------------------------------------- mpirun was unable to start the specified application as it encountered an error. More information may be available above. -------------------------------------------------------------------------- I googled a little and did all the things as described here: http://www.open-mpi.org/community/lists/users/2010/03/12355.php but I'm still getting the same error. Can anyone help me? Upd2: Error code -2147024891 might be WMI error WBEM_E_INVALID_PARAMETER (0x80041008) which occures when one of the parameters passed to the WMI call is not correct. Does this mean that the problem is in OpenMPI source code itself? Or maybe it's because of wrong/outdated wincred.h and credui.lib I used while building OpenMPI from the source code?

    Read the article

  • SpringBatch Jaxb2Marshaller: different name of class and xml attribute

    - by user588961
    I try to read an xml file as input for spring batch: Java Class: package de.example.schema.processes.standardprocess; @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "Process", namespace = "http://schema.example.de/processes/process", propOrder = { "input" }) public class Process implements Serializable { @XmlElement(namespace = "http://schema.example.de/processes/process") protected ProcessInput input; public ProcessInput getInput() { return input; } public void setInput(ProcessInput value) { this.input = value; } } SpringBatch dev-job.xml: <bean id="exampleReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="step"> <property name="fragmentRootElementName" value="input" /> <property name="resource" value="file:#{jobParameters['dateiname']}" /> <property name="unmarshaller" ref="jaxb2Marshaller" /> </bean> <bean id="jaxb2Marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="classesToBeBound"> <list> <value>de.example.schema.processes.standardprocess.Process</value> <value>de.example.schema.processes.standardprocess.ProcessInput</value> ... </list> </property> </bean> Input file: <?xml version="1.0" encoding="UTF-8"?> <process:process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:process="http://schema.example.de/processes/process"> <process:input> ... </process:input> </process:process> It fires the following exception: [javax.xml.bind.UnmarshalException: unexpected element (uri:"http://schema.example.de/processes/process", local:"input"). Expected elements are <<{http://schema.example.de/processes/process}processInput] at org.springframework.oxm.jaxb.JaxbUtils.convertJaxbException(JaxbUtils.java:92) at org.springframework.oxm.jaxb.AbstractJaxbMarshaller.convertJaxbException(AbstractJaxbMarshaller.java:143) at org.springframework.oxm.jaxb.Jaxb2Marshaller.unmarshal(Jaxb2Marshaller.java:428) If I change to in xml it work's fine. Unfortunately I can change neither the xml nor the java class. Is there a possibility to make Jaxb2Marshaller map the element 'input' to the class 'ProcessInput'?
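
    The failure is purely a name lookup: without an @XmlRootElement on ProcessInput, JAXB only knows that class by its type, so an element literally named input cannot be matched to it. Since neither the XML nor the generated classes can change, one hedged direction is a thin adapter implementing org.springframework.oxm.Unmarshaller that falls back to JAXB's unmarshal-by-declared-type, which skips the element-name lookup entirely; the core JAXB call would look roughly like this:

      // Hedged sketch of the underlying JAXB technique (not a drop-in Spring bean).
      import javax.xml.bind.JAXBContext;
      import javax.xml.bind.JAXBElement;
      import javax.xml.bind.Unmarshaller;
      import javax.xml.transform.Source;
      import de.example.schema.processes.standardprocess.Process;
      import de.example.schema.processes.standardprocess.ProcessInput;

      public class ProcessInputUnmarshaller {
          public ProcessInput unmarshal(Source fragment) throws Exception {
              JAXBContext ctx = JAXBContext.newInstance(Process.class, ProcessInput.class);
              Unmarshaller unmarshaller = ctx.createUnmarshaller();
              // Unmarshal by declared type: the element name "input" is ignored.
              JAXBElement<ProcessInput> element = unmarshaller.unmarshal(fragment, ProcessInput.class);
              return element.getValue();
          }
      }

    Wrapped in the Spring Unmarshaller interface, that adapter could then be injected as the reader's unmarshaller property in place of jaxb2Marshaller.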

    Read the article

  • Linker Issues with boost::thread under linux using Eclipse and CMake

    - by OcularProgrammer
    I'm in the process of attempting to port some code across from PC to Ubuntu, and am having some issues due to limited experience developing under linux. We use CMake to generate all our build stuff. Under windows I'm making VS2010 projects, and under Linux I'm making Eclipse projects. I've managed to get my OpenCV stuff ported across successfully, but am having major headaches trying to port my threaded boost apps. Just so we're clear, the steps I have followed so-far on a clean Ubuntu 12 installation. (I've done 2 clean re-installs to try and fix potential library cock-ups, now I'm just giving up and asking): Install Eclipse and Eclipse CDT using my package manager Install CMake and CMake Gui using my package manager Install libboost-all-dev using my package manager So-far that's all I've done. I can create the eclipse project using CMake with no errors, so CMake is successfully finding my boost install. When I try and build through eclipse is when I get issues; The app I'm attempting to build uses boost::asio for some UDP I/O and boost::thread to create worker threads for the asio I/O services. I can successfully compile each module, but when I come to link I get spammed with errors such as: /usr/bin/c++ CMakeFiles/RE05DevelopmentDemo.dir/main.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/RE05FusionListener/RE05FusionListener.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/NewEye/NewEye.cpp.o -o RE05DevelopmentDemo -rdynamic -Wl,-Bstatic -lboost_system-mt -lboost_date_time-mt -lboost_regex-mt -lboost_thread-mt -Wl,-Bdynamic /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)()) [clone .constprop.98]': make[2]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' (.text+0xc8): undefined reference to `pthread_key_create' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::interruption_enabled()': (.text+0x540): undefined reference to `pthread_getspecific' make[1]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x570): undefined reference to `pthread_getspecific' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x59f): undefined reference to `pthread_getspecific' Some Gotchas that I have collected from other StackOverflow posts and have already checked: The boost libs are all present at /usr/lib I am not getting any compile errors for inability to find the boost headers, so they must be getting found. I am trying to link statically, but I believe eclipse should be passing the correct arguments to make that happen since my CMakeLists.txt includes SET(Boost_USE_STATIC_LIBS ON) I'm officially out of ideas here, I have tried doing local builds of boost and a bunch of other stuff with no more success. I even re-installed Ubuntu to ensure I haven't completely fracked the libs directories and links with multiple weird versions or anything else. Any help would be muchly appreciated.
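
    Every one of those undefined references (pthread_key_create, pthread_getspecific, ...) lives in libpthread, which Windows builds never need but a statically linked libboost_thread on Linux does, and it has to appear after the Boost archives on the link line. A hedged CMakeLists.txt fragment, using the target name visible in the link command above:

      # Hedged sketch: let CMake find the platform thread library and append it
      # after the Boost libraries so the static archives can resolve pthread_* symbols.
      find_package(Threads REQUIRED)
      target_link_libraries(RE05DevelopmentDemo
          ${Boost_LIBRARIES}
          ${CMAKE_THREAD_LIBS_INIT})   # typically expands to -lpthread / -pthread on Linux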

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    we have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented. MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow via slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example: CRecordset r(&m_db); r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref"); r.MoveFirst(); while(!r.IsEOF()) { // Retrieve CString strData; crs.GetFieldValue(L"a.something", strData); crs.MoveNext(); } Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext(), the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using: ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER); But this didn't help either. There are still long-running exec's to sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the resultset). This however only happens on queries with LEFT JOINS, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question1: Is is possible to somehow get the native client to "cache" all results locally use local cursors in a similar fashion as the MySQL driver seems to do it? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk the server again until the next query. We don't care about recordset updates, deletes, etc or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it. Question2: Is there a more efficient way to just retrieve data in MFC with ODBC?
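
    The sp_cursorfetch traffic is the signature of a server-side cursor: SQL Server only streams a "default result set" (one round trip, rows pushed until the end) when the statement is forward-only and read-only, which is exactly what a snapshot recordset is not. A hedged sketch of the same loop opened that way:

      // Hedged sketch: forwardOnly + readOnly requests a default result set, so rows
      // stream to the client instead of being fetched one at a time via sp_cursorfetch.
      CRecordset rs(&m_db);
      rs.Open(CRecordset::forwardOnly,
              L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID = b.Ref",
              CRecordset::readOnly);

      while (!rs.IsEOF())
      {
          CString value;
          rs.GetFieldValue(L"a.something", value);   // field name kept from the question
          rs.MoveNext();
      }
      rs.Close();

    If more throughput is still needed, CRecordset::useMultiRowFetch with a larger row set size is the next knob to try; it changes how fields are read, so treat it as a direction rather than a drop-in fix.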

    Read the article

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT I'm still hoping for some advice on this, i tried to clarify my intentions... When i came upon device pairing in my mobile communication framework i studied a lot of papers on this topic and and also got some input from previous questions here. But, i didn't find a ready to implement protocol solution - so i invented a derivate and as i'm no crypto geek i'm not sure about the security caveats of the final solution: The main questions are Is SHA256 sufficient as a commit function? Is the addition of the shared secret as an authentication info in the commit string safe? What is the overall security of the 1024 bit group DH I assume at most 2^-24 bit probability of succesful MITM attack (because of 24 bit challenge). Is this plausible? What may be the most promising attack (besides ripping the device out off my numb, cold hands) This is the algorithm sketch For first time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from: A fix "UUID" for the communicating entity/role (128 bit, sent at protocol start, before commitment) The public DH key (192 bit private key, based on the 1024 bit Oakley group) A 24 bit random challenge Commit is computed using SHA256 c = sha256( UUID || DH pub || Chall) Both parties exchange this commitment, open and transfer the plain content of the above values. The 24 bit random is displayed to the user for manual authentication DH session key (128 bytes, see above) is computed When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret Next time devices connect, commit is computed by additionally hashing the previous DH session key before the random challenge. For sure it is not transfered when opening. c = sha256( UUID || DH pub || DH sess || Chall) Now the user is not bothered authenticating when the local party can derive the same commitment using his own, stored previous DH session key. After succesful connection the new DH session key becomes the new shared secret. As this does not exactly fit the protocols i found so far (and as such their security proofs), i'd be very interested to get an opinion from some more crypto enabled guys here. BTW. i did read about the "EKE" protocol, but i'm not sure what the extra security level is.
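
    For reviewers who prefer reading code to prose, the commit function described above is sketched below; the concatenation order and byte encodings are assumptions of this write-up, not part of any vetted specification.

      # Hedged sketch of the commitment described above (Python used purely for illustration).
      import hashlib

      def commitment(uuid: bytes, dh_public: bytes, challenge: bytes,
                     previous_session_key: bytes = b"") -> bytes:
          # First pairing:  c = SHA256(UUID || DHpub || Chall)
          # Re-pairing:     c = SHA256(UUID || DHpub || DHsess || Chall)
          return hashlib.sha256(uuid + dh_public + previous_session_key + challenge).digest()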

    Read the article

  • Parsing string logic issue c#

    - by N0xus
    This is a follow on from this question My program is taking in a string that is comprised of two parts: a distance value and an id number respectively. I've split these up and stored them in local variables inside my program. All of the id numbers are stored in a dictionary and are used check the incoming distance value. Though I should note that each string that gets sent into my program from the device is passed along on a single string. The next time my program receives that a signal from a device, it overrides the previous data that was there before. Should the id key coming into my program match one inside my dictionary, then a variable held next to my dictionaries key, should be updated. However, when I run my program, I don't get 6 different values, I only get the same value and they all update at the same time. This is all the code I have written trying to do this: Dictionary<string, string> myDictonary = new Dictionary<string, string>(); string Value1 = ""; string Value2 = ""; string Value3 = ""; string Value4 = ""; string Value5 = ""; string Value6 = ""; void Start() { myDictonary.Add("11111111", Value1); myDictonary.Add("22222222", Value2); myDictonary.Add("33333333", Value3); myDictonary.Add("44444444", Value4); myDictonary.Add("55555555", Value5); myDictonary.Add("66666666", Value6); } private void AppendString(string message) { testMessage = message; string[] messages = message.Split(','); foreach(string w in messages) { if(!message.StartsWith(" ")) outputContent.text += w + "\n"; } messageCount = "RSSI number " + messages[0]; uuidString = "UUID number " + messages[1]; if(myDictonary.ContainsKey(messages[1])) { Value1 = messageCount; Value2 = messageCount; Value3 = messageCount; Value4 = messageCount; Value5 = messageCount; Value6 = messageCount; } } How can I get it so that when programs recives the first key, for example 1111111, it only updates Value1? The information that comes through can be dynamic, so I'd like to avoid harding as much information as I possibly can.
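
    The six ValueN strings are copied into the dictionary once, in Start(); assigning to Value1 later never changes what the dictionary holds, and the if block overwrites all six fields whenever any known key arrives, which is exactly the "everything updates at once" symptom. A hedged sketch that keeps the readings inside the dictionary and touches only the entry whose key matched:

      // Hedged sketch: store the latest reading per id in the dictionary itself and
      // update only the entry that matches the incoming message.
      private readonly Dictionary<string, string> readings = new Dictionary<string, string>
      {
          { "11111111", "" }, { "22222222", "" }, { "33333333", "" },
          { "44444444", "" }, { "55555555", "" }, { "66666666", "" },
      };

      private void AppendString(string message)
      {
          string[] parts = message.Split(',');
          if (parts.Length < 2) return;              // malformed packet, ignore it

          string rssi = "RSSI number " + parts[0];
          string id   = parts[1];

          if (readings.ContainsKey(id))
              readings[id] = rssi;                   // only the matching device changes
      }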

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: -- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. -- An application is linked against libsome1.so. -- This application uses libdl.so to dynamically load another module, say libmagic.so. -- Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. -- So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... Or at least libVersion() and LIB_VERSION values match (and it reports version 2 not 1). -- However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory if I delete libsome1.so and create a soft link in its place to point to libsome2.so, everything works as expected and the same corruption does not happen. I can't help but think that this may be caused due to some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are alised to symbol@@VER_2 (which I am still confused about because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!!! Help!!!!!! Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?
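
    To make the last questions concrete: a symbol assigned to local: in a version script is dropped from the dynamic symbol table, so the runtime linker never sees it and it can neither be exported nor interposed; references to it from the library's own exported functions are bound to the internal definition when the shared object is built. A hedged example of such a script for libmagic.so (the three exported names are placeholders):

      /* Hedged example version script for libmagic.so; the three global names
         are placeholders for the real public entry points. */
      LIBMAGIC_1.0 {
          global:
              magic_init;
              magic_process;
              magic_shutdown;
          local:
              *;      /* everything else stays out of the dynamic symbol table */
      };

    It is applied at link time with -Wl,--version-script=libmagic.map. To see which version node a symbol actually ended up on, objdump -T or readelf --dyn-syms is more reliable than nm, which can omit the @@VER_2 decoration.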

    Read the article

  • php connecting to mysql server(localhost) very slow

    - by Ahmad
    It's a little complicated, so here is a summary: the connection to the DB is very slow. Page rendering takes around 10 seconds, but the last statement on the page is an echo and I can see its output while the page is still loading in Firefox (IE behaves the same); in Google Chrome the output only becomes visible when loading finishes, though the total loading time is roughly the same across browsers. While debugging I found that the DB connectivity is what creates the problem. The DB was originally on another machine, so to debug further I deployed it on my local machine; the DB connection is now at 127.0.0.1, yet connecting still takes a long time, which suggests the issue is with Apache/PHP rather than MySQL. But when I deployed my code on another machine that connects to the DB remotely, everything was fine. The application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity remained. I installed another Apache on my machine with default settings and the connection was still very slow. I added the following statements to measure execution time: $stime = microtime(); $stime = explode(" ",$stime); $stime = $stime[1] + $stime[0]; // my code -- it involves connection to DB $mtime = microtime(); $mtime = explode(" ",$mtime); $mtime = $mtime[1] + $mtime[0]; $totaltime = ($mtime - $stime); echo $totaltime; The output is 0.0631899833679, but Firebug's Net panel shows a total loading time of 10-11 seconds, and the same happens in Google Chrome. I tried turning off the Windows firewall; connectivity is still slow and I just can't find the reason. I've tried multiple DB servers and multiple Apaches, and nothing seems to work. Any idea what the problem might be?
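
    Two cheap diagnostics are worth a try: time the connect call on its own (microtime(true) spares the explode() arithmetic above), and connect to 127.0.0.1 explicitly rather than "localhost", since on Windows the hostname can resolve to the IPv6 ::1 address first and stall until that attempt gives up. A hedged sketch with placeholder credentials:

      <?php
      // Hedged sketch: isolate the connection time and bypass hostname resolution.
      $t0   = microtime(true);
      $link = mysqli_connect('127.0.0.1', 'dbuser', 'dbpass', 'dbname');
      $t1   = microtime(true);

      printf("connect took %.3f s\n", $t1 - $t0);

      if (!$link) {
          die('connect error: ' . mysqli_connect_error());
      }

    If the connect itself is fast and the script's own timer already reports 0.06 s, the missing seconds are being spent outside PHP, and the next places to look are Apache itself (KeepAlive, hostname lookups in logging) and anything scanning the response on the way out.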

    Read the article

  • Returning JSON in CFFunction and appending it to layer is causing an error

    - by Mel
    I'm using the qTip jQuery plugin to generate a dynamic tooltip. I'm getting an error in my JS, and I'm unsure if its source is the JSON or the JS. The tooltip calls the following function: (sorry about all this code, but it's necessary) <cffunction name="fGameDetails" access="remote" returnType="any" returnformat="JSON" output="false" hint="This grabs game details for the games.cfm page"> <!---Argument, which is the game ID---> <cfargument name="gameID" type="numeric" required="true" hint="CFC will look for GameID and retrieve its details"> <!---Local var---> <cfset var qGameDetails = ""> <!---Database query---> <cfquery name="qGameDetails" datasource="#REQUEST.datasource#"> SELECT titles.titleName AS tName, titles.titleBrief AS tBrief, games.gameID, games.titleID, games.releaseDate AS rDate, genres.genreName AS gName, platforms.platformAbbr AS pAbbr, platforms.platformName AS pName, creviews.cReviewScore AS rScore, ratings.ratingName AS rName FROM games Inner Join platforms ON platforms.platformID = games.platformID Inner Join titles ON titles.titleID = games.titleID Inner Join genres ON genres.genreID = games.genreID Inner Join creviews ON games.gameID = creviews.gameID Inner Join ratings ON ratings.ratingID = games.ratingID WHERE (games.gameID = #ARGUMENTS.gameID#); </cfquery> <cfreturn qGameDetails> </cffunction> This function returns the following JSON: { "COLUMNS": [ "TNAME", "TBRIEF", "GAMEID", "TITLEID", "RDATE", "GNAME", "PABBR", "PNAME", "RSCORE", "RNAME" ], "DATA": [ [ "Dark Void", "Ancient gods known as 'The Watchers,' once banished from our world by superhuman Adepts, have returned with a vengeance.", 154, 54, "January, 19 2010 00:00:00", "Action & Adventure", "PS3", "Playstation 3", 3.3, "14 Anos" ] ] } The problem I'm having is every time I try to append the JSON to the layer #catalog, I get a syntax error that says "missing parenthetical." This is the JavaScript I'm using: $(document).ready(function() { $('#catalog a[href]').each(function() { $(this).qtip( { content: { url: '/gamezilla/resources/components/viewgames.cfc?method=fGameDetails', data: { gameID: $(this).attr('href').match(/gameID=([0-9]+)$/)[1] }, method: 'get' }, api: { beforeContentUpdate: function(content) { var json = eval('(' + content + ')'); content = $('<div />').append( $('<h1 />', { html: json.TNAME })); return content; } }, style: { width: 300, height: 300, padding: 0, name: 'light', tip: { corner: 'leftMiddle', size: { x: 40, y : 40 } } }, position: { corner: { target: 'rightMiddle', tooltip: 'leftMiddle' } } }); }); }); Any ideas where I'm going wrong? I tried many things for several days and I can't find the issue. Many thanks!
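
    Whatever triggers the "missing parenthetical" message, there is a second problem waiting: the JSON above is ColdFusion's query serialization, a COLUMNS array of names plus a DATA array of rows, so json.TNAME is undefined; the title actually lives at DATA[0][position of "TNAME"]. A hedged rewrite of the callback's parsing step:

      // Hedged sketch: look the column up by name in COLUMNS, then read it from DATA.
      beforeContentUpdate: function(content) {
          var json  = $.parseJSON(content);               // avoids eval()
          var col   = $.inArray('TNAME', json.COLUMNS);   // index of the TNAME column
          var title = json.DATA[0][col];

          return $('<div />').append($('<h1 />', { html: title }));
      }

    It is also worth confirming in Firebug exactly what string the browser receives, since any debugging output appended to the response would break eval() and $.parseJSON() alike.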

    Read the article

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong to ServerFault because it focuses on Programing Architecture. I have following questions regarding differences between Cloud and Virtualization.. How Cloud is different then Virtualization? Currently I tried to find out pricing of Rackspace, Amazone and all similar cloud providers, I found that our current 6 dedicated servers came cheaper then their pricing. So how one can claim cloud is cheaper? Is it cheaper only in comparison of normal hosting? We re organized our infrastructure in virtual environment to reduce or configuration overhead at time of failure, we did not have to rewrite any peice of code that is already written for earlier setup. So moving to virtualization does not require any re programming. But cloud is absoltely different and it will require entire reprogramming right? Is it really worth to recode when our current IT costs are 3-4 times lower then cloud hosting including raid backups and all sort of clustering for high availability? New programming architecture means new overheads of training staff, new methods of testing and new deployment schemes, does it justify over "on demand resource usage" words of cloud? We are having current development architecture with simple Server side ASP.NET WebServices with no local context and on client side Flex/Silverlight which offers pretty good REST architecture and its highly scalable. How does cloud differs from REST model of deployment? On storage, SQL Server or MySQL offers pretty good replication and high availibility then what is advantage in cloud? Data guarantee, one of our vendor hosting some other customer's app on cloud (one of most used), lost Entire Hard Disk (the virtual) and entire module in first 6 months. Second provider said its your duty to take backup, fine I agree, but no provider gives SLA for data guarantee, they give 99% uptime. However in most business apps, uptime is less important then data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me little skeptical to go for cloud and loosing control over data. And I feel its just a big marketing buzz to sell virtulization in different form. Size of data, currently all providers charge very heavy for large data, if you are hosting only below 100GB cloud can be good alternative, but I think virtual servers and dedicated servers above 100GB to few TBs are still cheaper. Why would want to pay so high on cloud when there is no data guarentee as well as it doesnt say anything about redundancy. (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post)

    Read the article

  • BeanCreationException in Spring Framework .WAR deploy to Tomcat 6 on Ubuntu 9.10

    - by JediPotPie
    I am in the process of switching from a Windows box to Ubunutu and I want to run my own local instance of Tomcat 6. I have installed Tomcat 6 without any basic issues. When I try to deploy a .war file that I had running on the Tomcat 6 instance on my Windows box I am getting the following error.... Apr 26, 2010 3:30:27 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing Spring root WebApplicationContext Apr 26, 2010 3:30:27 PM org.apache.catalina.core.StandardContext listenerStart SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [com.ameren.eam.ldap.LdapDAONovellImpl] for bean with name 'testNovellDao' defined in ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.ClassNotFoundException: com.ameren.eam.ldap.LdapDAONovellImpl at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1173) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:479) at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:787) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:393) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:736) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:369) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:261) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3934) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4429) at org.apache.catalina.manager.ManagerServlet.start(ManagerServlet.java:1249) at org.apache.catalina.manager.HTMLManagerServlet.start(HTMLManagerServlet.java:612) at org.apache.catalina.manager.HTMLManagerServlet.doGet(HTMLManagerServlet.java:136) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:269) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAsPrivileged(Subject.java:537) at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:301) at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:162) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:283) at org.apache.catalina.core.ApplicationFilterChain.access$000(ApplicationFilterChain.java:56) at org.apache.catalina.core.ApplicationFilterChain$1.run(ApplicationFilterChain.java:189) at java.security.AccessController.doPrivileged(Native Method) at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:185) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:636) Caused by: java.lang.ClassNotFoundException: com.ameren.eam.ldap.LdapDAONovellImpl at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1399) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1245) at org.springframework.util.ClassUtils.forName(ClassUtils.java:230) at org.springframework.beans.factory.support.AbstractBeanDefinition.resolveBeanClass(AbstractBeanDefinition.java:381) at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1170) ... 40 more The class that is not being found is located at /WEB-INF/classes/com/ameren/eam/ldap/LdapDAONovellImpl.class relative to /WEB-INF/applicationContext.xml. I cannot figure out why it cannot find the class? Any ideas would be great.

    Read the article

  • XmlSerializer.Deserialize blocks over NetworkStream

    - by Luca
    I'm trying to sends XML serializable objects over a network stream. I've already used this on an UDP broadcast server, where it receive UDP messages from the local network. Here a snippet of the server side: while (mServiceStopFlag == false) { if (mSocket.Available > 0) { IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Any, DiscoveryPort); byte[] bData; // Receive discovery message bData = mSocket.Receive(ref ipEndPoint); // Handle discovery message HandleDiscoveryMessage(ipEndPoint.Address, bData); ... Instead this is the client side: IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Broadcast, DiscoveryPort); MemoryStream mStream = new MemoryStream(); byte[] bData; // Create broadcast UDP server mSocket = new UdpClient(); mSocket.EnableBroadcast = true; // Create datagram data foreach (NetService s in ctx.Services) XmlHelper.SerializeClass<NetService>(mStream, s); bData = mStream.GetBuffer(); // Notify the services while (mServiceStopFlag == false) { mSocket.Send(bData, (int)mStream.Length, ipEndPoint); Thread.Sleep(DefaultServiceLatency); } It works very fine. But now i'me trying to get the same result, but on a TcpClient socket, but the using directly an XMLSerializer instance: On server side: TcpClient sSocket = k.Key; ServiceContext sContext = k.Value; Message msg = new Message(); while (sSocket.Connected == true) { if (sSocket.Available > 0) { StreamReader tr = new StreamReader(sSocket.GetStream()); msg = (Message)mXmlSerialize.Deserialize(tr); // Handle message msg = sContext.Handler(msg); // Reply with another message if (msg != null) mXmlSerialize.Serialize(sSocket.GetStream(), msg); } else Thread.Sleep(40); } And on client side: NetworkStream mSocketStream; Message rMessage; // Network stream mSocketStream = mSocket.GetStream(); // Send the message mXmlSerialize.Serialize(mSocketStream, msg); // Receive the answer rMessage = (Message)mXmlSerialize.Deserialize(mSocketStream); return (rMessage); The data is sent (Available property is greater then 0), but the method XmlSerialize.Deserialize (which should deserialize the Message class) blocks. What am I missing?
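
    Deserialize blocks because the serializer's XML reader pulls from the stream in buffered chunks and keeps reading after the closing root tag, and a NetworkStream never reports end-of-stream while the connection is open. The usual cure is to frame each message explicitly, for example with a length prefix, and deserialize from an in-memory copy. A hedged sketch, reusing the Message type and serializer from the question:

      // Hedged sketch: length-prefix every XML message so the receiver knows where it ends.
      static void SendMessage(NetworkStream stream, XmlSerializer serializer, Message msg)
      {
          using (var buffer = new MemoryStream())
          {
              serializer.Serialize(buffer, msg);
              byte[] length = BitConverter.GetBytes((int)buffer.Length);
              stream.Write(length, 0, 4);
              stream.Write(buffer.GetBuffer(), 0, (int)buffer.Length);
          }
      }

      static Message ReceiveMessage(NetworkStream stream, XmlSerializer serializer)
      {
          byte[] lengthBytes = new byte[4];
          ReadExactly(stream, lengthBytes, 4);
          byte[] payload = new byte[BitConverter.ToInt32(lengthBytes, 0)];
          ReadExactly(stream, payload, payload.Length);
          using (var buffer = new MemoryStream(payload))
              return (Message)serializer.Deserialize(buffer);
      }

      static void ReadExactly(Stream stream, byte[] buffer, int count)
      {
          int read = 0;
          while (read < count)
          {
              int n = stream.Read(buffer, read, count - read);
              if (n == 0) throw new EndOfStreamException("connection closed mid-message");
              read += n;
          }
      }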

    Read the article

  • C++ SQLDriverConnect API

    - by harshalkreddy
    Hi, I am using visual studio 2008 and sql server 2008 for developing application(SQL server is in my system). I need to fetch some fields from the database. I am using the SQLDriverConnect API to connect to the database. If I use the "SQL_DRIVER_PROMPT" I will get pop window to select the data source. I don't want this window to appear. As per my understanding this window will appear if we provide insufficient information in the connection string. I think I have provided all the information. I am trying to connect with windows authentication. I tried different options but still no luck. Please help me in solving this problem. Below is the code that I am using: //******************************************************************************** // SQLDriverConnect_ref.cpp // compile with: odbc32.lib user32.lib #include <windows.h> #include <sqlext.h> int main() { SQLHENV henv; SQLHDBC hdbc; SQLHSTMT hstmt; SQLRETURN retcode; SQLWCHAR OutConnStr[255]; SQLSMALLINT OutConnStrLen; SQLCHAR ConnStrIn[255] = "DRIVER={SQL Server};SERVER=(local);DSN=MyDSN;DATABASE=MyDatabase;Trusted_Connection=yes;"; //SQLWCHAR *ConntStr =(SQLWCHAR *) "DRIVER={SQL Server};DSN=MyDSN;"; HWND desktopHandle = GetDesktopWindow(); // desktop's window handle // Allocate environment handle retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv); // Set the ODBC version environment attribute if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER*)SQL_OV_ODBC3, 0); // Allocate connection handle if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc); // Set login timeout to 5 seconds if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { SQLSetConnectAttr(hdbc, SQL_LOGIN_TIMEOUT, (SQLPOINTER)5, 0); retcode = SQLDriverConnect( // SQL_NULL_HDBC hdbc, desktopHandle, (SQLWCHAR *)ConnStrIn, SQL_NTS, OutConnStr, 255, &OutConnStrLen, SQL_DRIVER_NOPROMPT); // Allocate statement handle if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt); // Process data if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { SQLFreeHandle(SQL_HANDLE_STMT, hstmt); } SQLDisconnect(hdbc); } SQLFreeHandle(SQL_HANDLE_DBC, hdbc); } } SQLFreeHandle(SQL_HANDLE_ENV, henv); } } //******************************************************************************** Thanks in advance, Harsha
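
    Two separate things are at work: SQL_DRIVER_PROMPT always shows the data-source dialog by design (SQL_DRIVER_COMPLETE prompts only when information is missing, SQL_DRIVER_NOPROMPT never does), and in a Unicode build SQLDriverConnect resolves to SQLDriverConnectW, so casting a narrow SQLCHAR buffer to SQLWCHAR * hands the driver a garbled connection string. A hedged sketch of the connect call with a genuinely wide string (keep either DSN= or DRIVER=/SERVER=, not both):

      // Hedged sketch: call the wide entry point with a wide literal so no cast is needed.
      SQLWCHAR connStrIn[] =
          L"DRIVER={SQL Server};SERVER=(local);DATABASE=MyDatabase;Trusted_Connection=yes;";

      retcode = SQLDriverConnectW(hdbc,
                                  NULL,            // no parent window needed without a prompt
                                  connStrIn,
                                  SQL_NTS,
                                  OutConnStr,
                                  255,
                                  &OutConnStrLen,
                                  SQL_DRIVER_NOPROMPT);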

    Read the article

  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project?

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project, with NHibernate as its persistance layer. For now, some functionnalities have been implemented, but only use local NHibernate sessions: each method that accessed the database (read or write) needs to instanciate its own NHibernate session, with the "using()" directive. The problem is that I want to leverage NHibernate's Lazy-Loading capabilities to improve the performance of my project. This implies an open NHibernate session per request until the view is rendered. Furthermore, simultaneous request must be supported (multiple Sessions at the same time). How can I achieve that as cleanly as possible? I searched the Web a little bit and learned about the session-per-request pattern. Most of the implementations I saw used some sort of Http* (HttpContext, etc.) object to store the session. Also, using the Application_BeginRequest/Application_EndRequest functions is complicated, since they get fired for each HTTP request (aspx files, css files, js files, etc.), when I only want to instanciate a session once per request. The concern that I have is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means that I do not want to handle sessions at the controller level nor the view one. I have a few options in mind. Which one seems the best ? Use interceptors (like in GRAILS) that get triggered before and after the controller action. These would open and close sessions/transactions. Is it possible in the ASP.NET MVC world? Use the CurrentSessionContext Singleton provided by NHibernate in a Web context. Using this page as an example, I think this is quite promising, but that still requires filters at the controller level. Use the HttpContext.Current.Items to store the request session. This, coupled with a few lines of code in Global.asax.cs, can easily provide me with a session on the request level. However, it means that dependencies will be injected between NHibernate and my views (HttpContext). Thank you very much!
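
    Option 1 translates directly to ASP.NET MVC: a filter attribute plays the role GRAILS interceptors do, and neither controllers nor views ever see an ISession; only the repository code does, via sessionFactory.GetCurrentSession(). A hedged sketch, with SessionFactoryHolder standing in for wherever the ISessionFactory lives and the session closed in OnResultExecuted so it is still open while the view renders and lazy loading fires:

      // Hedged sketch of a session-per-request filter; names are illustrative and
      // current_session_context_class must be set to "web" in the NHibernate config.
      using System.Web.Mvc;
      using NHibernate;
      using NHibernate.Context;

      public class NHibernateSessionPerRequestAttribute : ActionFilterAttribute
      {
          public override void OnActionExecuting(ActionExecutingContext filterContext)
          {
              ISession session = SessionFactoryHolder.Factory.OpenSession();
              session.BeginTransaction();
              CurrentSessionContext.Bind(session);
          }

          public override void OnResultExecuted(ResultExecutedContext filterContext)
          {
              ISession session = CurrentSessionContext.Unbind(SessionFactoryHolder.Factory);
              if (session == null) return;
              try
              {
                  if (filterContext.Exception == null) session.Transaction.Commit();
                  else session.Transaction.Rollback();
              }
              finally { session.Dispose(); }
          }
      }

    Applying it on a base controller (or globally) keeps NHibernate references out of the individual controllers and views entirely.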

    Read the article

  • Rendering problem with UITableview

    - by Spider-Paddy
    I have a very strange problem with a UITableview within a navigation controller on the iPhone simulator. Of the cells displayed, only some are correctly rendered. They are all supposed to look the same but the majority are missing the accessory I've set, scrolling the view changes which cell has the accessory so I suspect it's some sort of cell caching happening, although the contents are correct for each cell. I also set an image as the background and that was also only displaying sporadically but I fixed that by changing cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; (which also only rendered a random cell with the background) to cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; I now need to fix the problem with the accessory only showing on a random cell. I tried moving the code from cellForRowAtIndex to willDisplayCell but it made no difference. I put in a log command to confirm that it is running through each frame. Basically it's a table view (UITableViewCellStyleSubtitle) that gets its info from a server & is then updated by a delegate method calling reload. Code is: -(void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath { NSLog(@"%@", [NSString stringWithFormat:@"Setting colours for cell %i", indexPath.row]); // Set cell background // cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; cell.textLabel.backgroundColor = [UIColor clearColor]; cell.detailTextLabel.backgroundColor = [UIColor clearColor]; // detailTableViewAccessory is a view containing an imageview in this view's nib // i.e. nib <- view <- imageview <- image cell.accessoryView = detailTableViewAccessory; } // Called by data fetching object when done -(void)listDataTransferComplete:(ArticleListParser *)articleListParserObject { NSLog(@"Data parsed, reloading detail table"); self.currentTotalResultPages = (((articleListParserObject.currentArticleCount - 1) / 10) + 1); self.detailTableDataSource = [articleListParserObject.returnedArray copy]; // make a local copy of the returned array // Render table again with returned array data (neither of these 2 fixed it) [self.detailTableView performSelectorOnMainThread:@selector(reloadData) withObject:nil waitUntilDone:NO]; // [self.detailTableView reloadData]; // Re-enable necessary buttons (including table cells) letUserSelectRow = TRUE; [btnByName setEnabled:TRUE]; [btnByPrice setEnabled:TRUE]; // Remove please wait message NSLog(@"Removing please wait view"); [pleaseWaitViewControllerObject.view removeFromSuperview]; } I only included code that I thought was relevant, can supply more if needed. I can't test it on an iPhone yet so I don't know if it's maybe just a simulator anomaly or a bug in my code. I've always gotten good feedback from questions, any ideas?
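
    The wandering accessory has a simple cause: detailTableViewAccessory is a single UIView instance loaded from the nib, and a view can only have one superview, so assigning it as the accessoryView of every cell just moves it to whichever cell was configured last, which is exactly what scrolling and reloads make visible. Creating a fresh image view per cell keeps each one intact; a hedged sketch, with the image name assumed:

      // Hedged sketch: give each cell its own accessory view instead of sharing one.
      UIImageView *accessory = [[UIImageView alloc]
          initWithImage:[UIImage imageNamed:@"detail-accessory.png"]];   // image name assumed
      cell.accessoryView = accessory;
      [accessory release];   // omit this line under ARC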

    Read the article

  • Copy image to BLOB from client pc aka Java function in Oracle

    - by mumich
    Hi guys, I've been stuck with this for past two days. I've go java function stored in Oracle system which is supposed to copy image from local drive do remote database and store it in BLOB - it's called CopyBLOB and looks like this: import java.sql.*; import oracle.sql.*; import java.io.*; public class CopyBLOB { static int id; static String fileName = null; static Connection conn = null; public CopyBLOB(int idz, String f) { id = idz; fileName = f; } public static void copy(int ident, String path) throws SQLException, FileNotFoundException { CopyBLOB cpB = new CopyBLOB(ident, path); cpB.getConnection(); cpB.callUpdate(id, fileName); } public void getConnection() throws SQLException { DriverManager.registerDriver (new oracle.jdbc.OracleDriver()); try { conn = DriverManager.getConnection("jdbc:oracle:thin:@oraserv.ms.mff.cuni.cz:1521:db", "xxx", "xxx"); } catch (SQLException sqlex) { System.out.println("SQLException while getting db connection: "+sqlex); if (conn != null) conn.close(); } catch (Exception ex) { System.out.println("Exception while getting db connection: "+ex); if (conn != null) conn.close(); } } public void callUpdate(int id, String file ) throws SQLException, FileNotFoundException { CallableStatement cs = null; try { conn.setAutoCommit(false); File f = new File(file); FileInputStream fin = new FileInputStream(f); cs = (CallableStatement) conn.prepareCall( "begin add_image(?,?); end;" ); cs.setInt(1, id ); cs.setBinaryStream(2, fin, (int) f.length()); cs.execute(); conn.setAutoCommit(true); } catch ( SQLException sqlex ) { System.out.println("SQLException in callUpdateUsingStream method of given status : " + sqlex.getMessage() ); } catch ( FileNotFoundException fnex ) { System.out.println("FileNotFoundException in callUpdateUsingStream method of given status : " + fnex.getMessage() ); } finally { try { if (cs != null) cs.close(); if (conn != null) conn.close(); } catch ( Exception ex ) { System.out.println("Some exception in callUpdateUsingStream method of given status : " + ex.getMessage( ) ); } } } } The wrapper function is defined in package "MyPackage" as folows: procedure image_adder( id varchar2, path varchar2 ) AS language java name 'CopyBLOB.copy(java.lang.String, java.lang.String)'; And the inserting function called image_add is as simple as this: procedure add_image( id numeric(10), pic blob) AS BEGIN insert into pictures values (seq_pic.nextval, id, pic); END add_image; Now the problem: When I type call MyPackage.image_adder(1, 'd:\samples\img.jpg'); I get the ORA-29531 Error: No method copy in class CopyBLOB. Can you help me, please?
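
    ORA-29531 here is a signature-resolution failure rather than a missing class: the call specification promises copy(java.lang.String, java.lang.String), but the Java method is declared as copy(int, java.lang.String), so the resolver finds no matching method named copy. A hedged rewrite of the wrapper so the two signatures line up:

      -- Hedged sketch: make the PL/SQL call spec match the Java signature exactly.
      PROCEDURE image_adder( id NUMBER, path VARCHAR2 )
      AS LANGUAGE JAVA
      NAME 'CopyBLOB.copy(int, java.lang.String)';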

    Read the article

  • How to reliably map HTTP sessions in a proxy to the browser windows/tabs the user is viewing?

    - by Jehonathan
    I was using the Fiddler core .Net library as a local proxy to record the user activity in web. However I ended up with a problem which seems dirty to solve. I have a web browser say Google Chrome, and the user opened like 10 different tabs each with different web URLs. The problem is that the proxy records all the HTTP session initiated by each pages separately, causing me to figure out using my intelligence the tab which the corresponding HTTP session belonged to. I understand that this is because of the stateless nature of HTTP protocol. However I am just wondering is there an easy way to do this? I ended up with below c# code for that in Fiddler. Still its not a reliable solution due to the heuristics. This is a modification of the sample project bundled with Fiddler core for .NET 4. Basically what it does is filtering HTTP sessions initiated in last few seconds to find the first request or switching to another page made by the same tab in browser. It almost works, but not seems to be a universal solution. Fiddler.FiddlerApplication.AfterSessionComplete += delegate(Fiddler.Session oS) { //exclude other HTTP methods if (oS.oRequest.headers.HTTPMethod == "GET" || oS.oRequest.headers.HTTPMethod == "POST") //exclude other HTTP Status codes if (oS.oResponse.headers.HTTPResponseStatus == "200 OK" || oS.oResponse.headers.HTTPResponseStatus == "304 Not Modified") { //exclude other MIME responses (allow only text/html) var accept = oS.oRequest.headers.FindAll("Accept"); if (accept != null) { if(accept.Count>0) if (accept[0].Value.Contains("text/html")) { //exclude AJAX if (!oS.oRequest.headers.Exists("X-Requested-With")) { //find the referer for this request var referer = oS.oRequest.headers.FindAll("Referer"); //if no referer then assume this as a new request and display the same if(referer!=null) { //if no referer then assume this as a new request and display the same if (referer.Count > 0) { //lock the sessions Monitor.Enter(oAllSessions); //filter further using the response if (oS.oResponse.MIMEType == string.Empty || oS.oResponse.MIMEType == "text/html") //get all previous sessions with the same process ID this session request if(oAllSessions.FindAll(a=>a.LocalProcessID == oS.LocalProcessID) //get all previous sessions within last second (assuming the new tab opened initiated multiple sessions other than parent) .FindAll(z => (z.Timers.ClientBeginRequest > oS.Timers.ClientBeginRequest.AddSeconds(-1))) //get all previous sessions that belongs to the same port of the current session .FindAll(b=>b.port == oS.port ).FindAll(c=>c.clientIP ==oS.clientIP) //get all previus sessions with the same referrer URL of the current session .FindAll(y => referer[0].Value.Equals(y.fullUrl)) //get all previous sessions with the same host name of the current session .FindAll(m=>m.hostname==oS.hostname).Count==0 ) //if count ==0 that means this is the parent request Console.WriteLine(oS.fullUrl); //unlock sessions Monitor.Exit(oAllSessions); } else Console.WriteLine(oS.fullUrl); } else Console.WriteLine(oS.fullUrl); Console.WriteLine(); } } } } };

    Read the article

  • What alternatives do I have for source control, and does Git do that?

    - by RubberDuck
    I work as a freelance programmer for several clients and also create apps for myself. When I work for myself, obviously I work alone, and I generally don't work in a linear way. My big problems today are:

    1. I have a lot of apps that use the same classes I have developed. In the past, I put all these common classes in a directory outside all projects and included them in my apps using absolute paths, but this method sucks because by accident (if you forget) you may change a path or the disk and all projects are broken. So I decided to copy those classes into my projects every time. Because the majority of these classes do not change frequently, I am relatively OK, but when they change, I am in hell.
    2. When I change one of these classes, I have to propagate the changes to all the other apps using copies of them.

    I have also tried to create frameworks, but thanks to Apple I cannot create frameworks for iOS; I have to create libraries and bundles and build a nightmare of paths from one to the other and to the project to make that sh!t work. So I am done with frameworks/libraries in Xcode until Xcode is a decent IDE. I see that I need something better to manage my source code. What I need is this (I have never used git with Xcode; I have read Apple's docs but I still have these questions):

    1. Does git, used locally in Xcode, allow me to deal with assets or just code? Can I have the equivalent of a "framework" (code + assets) managed by git locally?
    2. Can an entire xcodeproj be managed as a unit? I mean, suppose I have an xcodeproj already created and want git to manage it. How do I enable git on a project that was created without it and start designating files for management? (I have enabled git in Xcode's preferences, but the whole Source Control menu is grayed out.)
    3. Is git the best option? Do I have another? Remember that my main condition is that the files should stay on the local computer.

    Please save me (I am a bit dramatic today). Thanks.

    Read the article

  • How do I get make's dependency generation to work for C? (Also: decode this sed/make statement!)

    - by Derek
    Hi all. I have a make build system, written by someone else, that I am trying to decipher. I get an error when I run it on a Red Hat system, but not when I run it on my Solaris system. The versions of gmake are the same major revision (one off on the minor revision). This is for building a C project, and the make system has a global Makefile.global that is inherited by each directory's local Makefile. Makefile.global has all the targets in it, starting with:

        all: $(LIB) $(BIN)

    where LIB builds libraries and BIN builds binaries. Jumping down the targets, I have:

        $(LIB) : $(GEN_LIB)

        $(GEN_LIB) : $(GEN_DEPS) $(GEN_OBJS)
            $(AR) $(ARFLAGS) $(GEN_LIB) $(GEN_OBJS)

        $(GEN_DEPS) :
            @set -e; rm -f $@; \
            $(CC) $(CDEP_FLAG) $(CFLAGS) $(INCDIRS) `basename $@ | sed 's/\.d/\.c/' | sed 's,^,$(HOME_SRC)/,'` | sed 's,\(.*\)\.o: ,$(GEN_OBJDIR)/\1.o $@ :,g' > [email protected] ; \
            cat [email protected] > $@ ; \
            cat [email protected] | cut -d: -f2 | grep '\.h' | sed 's,\.h,.h :,g' >> $@ ; \
            rm [email protected]

        $(GEN_OBJS) :
            $(CC) $(CFLAGS) $(INCDIRS) -c $(*F).c -lmpi -o $@

    I think these are all the relevant targets I need to include to answer my question. Definitions of those variables:

        CC = icc
        CDEP_FLAG = -M
        CFLAGS = various compiler flags, ifdef-type flags
        INCDIRS = include directory where all .h files are
        GEN_OBJDIR = /lib/objs
        HOME_SRC = .
        GEN_LIB = lib/$(LIB)
        GEN_DEPDIR = /lib/deps
        GEN_DEPS = $(addprefix $(GEN_DEPDIR)/,$(addsuffix .d,$(basename $(OBJS))))

    I think this covers everything you need; it is basically self-explanatory from the names. As best I can tell, this generates, in /lib/deps, a .d file that holds the object and source dependencies. In other words, for the utilities.a library I will get a utils.o and utils.c dependency stack, all in the file utils.d. I think some syntax error is being generated in that file, because I get the following error:

        ../lib/deps/util.d:25: *** target pattern contains no '%'.  Stop.
        gmake[2]: *** [all] Error 2
        gmake[1]: *** [all] Error 2
        gmake: *** [all] Error 2

    I am not sure if the error is in the dependency generation, or in some part further down, like the object generation target. If you need further info, let me know and I will add it to the post.

    Read the article

  • DataGridView live display of datatable using virtual mode

    - by Chris
    I have a DataGridView that will display records (log entries) from a database. The number of records that can exist at a time is very large. I would like to use the virtual mode feature of the DataGridView to display a page of data, and to minimize the amount of data that has to be transferred across the network at a given time. Polling for data is out of the question: several clients run at a time, all on the same network and viewing the records, and if they all poll for data the network will slow to a crawl. The data is read-only to the user; they won't be able to edit any of it, just view it.

    I need to know when updates occur in the database, and I need to update the screen with those updates accordingly using virtual mode. If the page of data a user is viewing contains data that has changed, he/she will see those updates on that page. If updates were made to data in the database but not to the data the user is viewing, then not much changes on the user's screen (maybe just the scroll bar, if records were added or removed). My current approach uses SQL Server change tracking with the Sync Framework. Each client has a local SQL Server CE instance and database file that is kept in sync with the main database server. I use the information from the synchronization event to see whether any changes were made to the main database and synced to the client.

    I need to use DataGridView virtual mode here because I can't have thousands of records loaded into the DataGridView at once, otherwise memory usage goes through the roof. The main challenge right now is knowing how to use virtual mode to provide a seamless experience: letting the user scroll up and down through the records while records update on the fly, without interfering with the user inappropriately. Has anybody dealt with this issue before, and if so, where can I see how they did it? I've gone through some of the MSDN documentation and examples on virtual mode. So far, I haven't found documentation and/or examples on their site that explain how to do what I am trying to accomplish.
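    As a rough illustration of the virtual-mode side of this (not of the Sync Framework part), here is a minimal sketch of a page-cached, read-only grid. The LogGridPager class, the page size, and the loadPage delegate are invented for illustration; loadPage stands in for whatever query runs against the local SQL Server CE copy. The DataGridView members used (VirtualMode, RowCount, CellValueNeeded, Invalidate) are standard WinForms.

        using System;
        using System.Collections.Generic;
        using System.Windows.Forms;

        public class LogGridPager
        {
            private const int PageSize = 200;
            private readonly DataGridView grid;
            // (offset, count) -> rows of column values; placeholder for the local SQL CE query
            private readonly Func<int, int, string[][]> loadPage;
            private readonly Dictionary<int, string[][]> cache = new Dictionary<int, string[][]>();

            public LogGridPager(DataGridView grid, int totalRows, Func<int, int, string[][]> loadPage)
            {
                this.grid = grid;
                this.loadPage = loadPage;
                grid.VirtualMode = true;                  // the grid asks for cell values on demand
                grid.ReadOnly = true;
                grid.CellValueNeeded += OnCellValueNeeded;
                grid.RowCount = totalRows;                // columns must already be defined on the grid
            }

            private void OnCellValueNeeded(object sender, DataGridViewCellValueEventArgs e)
            {
                int page = e.RowIndex / PageSize;
                string[][] rows;
                if (!cache.TryGetValue(page, out rows))
                {
                    rows = loadPage(page * PageSize, PageSize);   // one local query per page
                    cache[page] = rows;
                }
                int offset = e.RowIndex - page * PageSize;
                if (offset < rows.Length && e.ColumnIndex < rows[offset].Length)
                    e.Value = rows[offset][e.ColumnIndex];
            }

            // Call this from the sync-completed handler when change tracking reports updates.
            public void OnDatabaseChanged(int newTotalRows)
            {
                cache.Clear();                            // drop stale pages
                grid.RowCount = newTotalRows;             // adjusts the scroll bar for adds/removes
                grid.Invalidate();                        // visible cells get re-requested via CellValueNeeded
            }
        }

    Because the cache only refills when CellValueNeeded fires, the page the user is looking at gets re-queried after a sync, while off-screen pages cost nothing until they are scrolled to.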

    Read the article

  • How can I force the server socket to re-accept a request from a client?

    - by Roman
    For those who do not want to read a long question, here is a short version: a server has an open socket for a client. The server then gets a request to open a socket from the same client-IP and client-port. I want to force the server not to refuse such a request, but to close the old socket and open a new one. How can I do it?

    And here is the long (original) question. I have the following situation. There is an established connection between a server and a client. Then an external piece of software (Bonjour) tells my client that it no longer sees the server on the local network. The client does nothing about that, for the following reasons:

    1. If Bonjour does not see the server, it does not necessarily mean that the client cannot see the server.
    2. Even if the client trusts Bonjour and closes the socket, that does not improve the situation ("having no open socket" is worse than "having a potentially bad socket").

    So the client does nothing if the server becomes invisible to Bonjour. But then the server re-appears in Bonjour, and Bonjour notifies the client about that. At this point the following cases are possible:

    1. The server reappears on a new IP address, so the client needs to open a new socket to be able to communicate with the server.
    2. The server reappears on the old IP address. In this case we have two subcases:
    2.1. The server was restarted (switched off and then switched on again), so it does not remember the old socket (which is still used by the client). The client needs to close the old socket and open a new one (on the same server-IP address and the same server-port).
    2.2. We had a temporary network problem and the server was running the whole time, so the old socket is still usable. In this case the client does not really need to close the old socket and reopen a new one.

    To simplify my life, I decided to close and reopen the socket on the client side in any case (even though it is not really needed in the last situation). But that solution can cause problems: if I close the socket on the client side and then try to reopen a socket from the same client-IP and client-port, the server will not accept the request for a new socket. The server will think that such a socket already exists. Can I write the server in such a way that it does not refuse such requests? For example, if the server sees that a client sends a request for a socket from the same client-IP and client-port, it closes the existing socket associated with that client-IP and client-port and then accepts the new one.
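    Here is a minimal sketch of the server-side idea, assuming a plain TCP server in C# (the question does not say which language or socket API the server uses, so the class name, the port, and the dictionary are all illustrative): keep the live connections keyed by remote IP:port, and when a new connection arrives from an endpoint that is already in the table, close the stale socket instead of refusing the newcomer. Note that whether the new connection even reaches the application depends on how the server's TCP stack handles the half-dead old connection; this sketch only covers the application-level bookkeeping.

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;

        class ReacceptingServer
        {
            // remote "ip:port" -> the connection currently held for that client
            static readonly Dictionary<string, TcpClient> clients = new Dictionary<string, TcpClient>();

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 5000);
                listener.Start();
                while (true)
                {
                    TcpClient incoming = listener.AcceptTcpClient();
                    string key = incoming.Client.RemoteEndPoint.ToString();

                    lock (clients)
                    {
                        TcpClient old;
                        if (clients.TryGetValue(key, out old))
                        {
                            // Same client-IP/client-port as a socket we still hold:
                            // drop the stale one rather than rejecting the new request.
                            try { old.Close(); } catch (Exception) { /* already gone */ }
                        }
                        clients[key] = incoming;
                    }
                    // hand "incoming" off to whatever per-connection handler the real server uses
                }
            }
        }

    In practice the client usually reconnects from a fresh ephemeral port anyway, in which case the table simply gains a new entry and the old one can be reaped when its socket errors out.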

    Read the article

< Previous Page | 676 677 678 679 680 681 682 683 684 685 686 687  | Next Page >