Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 289/356

  • Need some help with a MySQL subquery count

    - by Ferdy
    I'm running into my own limits of MySQL query skills, so I hope some SQL guru can help out on this one. The situation is as follows: I have images that can be tagged. As you might expect, this is stored in three tables:

        Image
        Tag
        Tag_map (maps images to tags)

    I have a SQL query that calculates the related tags based on a tag id. The query basically checks what other tags were used for images using that tag. Example:

        Image1 tagged as "Bear"
        Image2 tagged as "Bear" and "Canada"

    If I throw "Bear" (or its tag id) at the query, it will return "Canada". This works fine. Here's the query:

        SELECT tag.name, tag.id, COUNT(tag_map.id) AS cnt
        FROM tag_map, tag
        WHERE tag_map.tag_id = tag.id
          AND tag.id != '185'
          AND tag_map.image_id IN
              (SELECT tag_map.image_id
               FROM tag_map
               INNER JOIN tag ON tag_map.tag_id = tag.id
               WHERE tag.id = '185')
        GROUP BY tag_map.id
        LIMIT 0,100

    The part I'm stuck with is the count. For each related tag returned, I want to know how many images are in that tag. Currently it always returns 1, even if there are, for example, 3. I've tried counting different columns, all resulting in the same output, so I guess there is a flaw in my thinking.


  • Import-PSSession is not importing cmdlets when used in a custom module

    - by Douglas Plumley
    I have a PowerShell script/function that works great when I use it in my PowerShell profile or manually copy/paste the function in the PowerShell window. I'm trying to make the function accessible to other members of my team as a module. I want to have the module stored in a central place so we can all add it to our PSModulePath. Here is a copy of the basic function:

        Function Connect-O365 {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }

    If I save this function in my PowerShell profile it works fine. I can dot source a *.ps1 script with this function in it and it works as well. The issue is when I save the function as a *.psm1 PowerShell script module. The function runs fine but none of the exported commands from the Import-PSSession are available. I think this may have something to do with the module scope. I'm looking for suggestions on how to get around this.

    EDIT: When I create the following module and run Connect-O365, the imported cmdlets will not be available.

        $scriptblock = {
            Function Connect-O365 {
                $o365cred = Get-Credential [email protected]
                $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
                Import-PSSession $session365 -AllowClobber
            }
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    When I import the next module, without the Connect-O365 function, the imported cmdlets are available.

        $scriptblock = {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    This appears to be a scope issue of some sort, just not sure how to get around it.


  • Ruby -- looking for some sort of "Regexp unescape" method

    - by RubyNoobie
    I have a bunch of strings that appear to have been double-escaped -- eg, I have

        "\\014\"\\000\"\\016smoothing\"\\011mean\"\\022color\"\\011zero@\\016"

    but I want

        "\014"\000"\016smoothing"\011mean"\022color"\011zero@\016"

    Is there a method I can use to unescape them? I imagine that I could make a regex to remove 1 backslash from every consecutive n backslashes, but I don't have a lot of regex experience and it seems there ought to be a "more elegant" way to do it. For example, when I puts MyString it displays the output I'd like, but I don't know how I might capture that into a variable. Thanks!

    Edited to add context: I have this class that is being used to marshal / restore some stuff, but when I restore some old strings it spits out a type error, which I've determined is because they weren't -- for some inexplicable reason -- stored as base64. They instead appear to be 'double-escaped', when I need them to be 'single-escaped' to get restored.

        require 'base64'

        class MarshaledStuff < ActiveRecord::Base

          validates_presence_of :marshaled_obj

          def contents
            obj = self.marshaled_obj
            return Marshal.restore(Base64.decode64(obj))
          end

          def contents=(newcontents)
            self.marshaled_obj = Base64.encode64(Marshal.dump(newcontents))
          end

        end


  • String operations

    - by NEO
    I am trying to find the most repeated word in a string. My code is as follows:

        public class Word {
            private String toWord;
            private int Count;

            public Word(int count, String word) {
                toWord = word;
                Count = count;
            }

            public static void main(String args[]) {
                String str = "my name is neo and my other name is also neo because I am neo";
                String[] str1 = str.split(" ");
                Word w1 = new Word(0, str1[0]);
                LinkedList<Word> list = new LinkedList<Word>();
                list.add(w1);
                ListIterator itr = list.listIterator();
                for (int i = 1; i < str1.length; i++) {
                    while (itr.hasNext()) {
                        if (str1[i].equalsTO(????));
                        else
                            list.add(new Word(0, str1[i]));
                    }

    How do I compare the string from the string array str1 to the string stored in the linked list, and how do I then increase the respective count? I will then print the string with the highest count; I don't know how to do that either.
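
    A minimal sketch of the usual counting approach in Java -- a HashMap from word to count instead of a hand-rolled list -- with illustrative names, not taken from the original post:

        import java.util.HashMap;
        import java.util.Map;

        public class MostRepeatedWord {
            public static void main(String[] args) {
                String str = "my name is neo and my other name is also neo because I am neo";
                Map<String, Integer> counts = new HashMap<>();
                // Count every word: getOrDefault avoids an explicit "contains" check.
                for (String word : str.split(" ")) {
                    counts.put(word, counts.getOrDefault(word, 0) + 1);
                }
                // Track the entry with the highest count.
                String best = null;
                int bestCount = 0;
                for (Map.Entry<String, Integer> e : counts.entrySet()) {
                    if (e.getValue() > bestCount) {
                        best = e.getKey();
                        bestCount = e.getValue();
                    }
                }
                System.out.println(best + " appears " + bestCount + " times");
            }
        }

    The map lookup replaces the manual list scan, and the final pass picks the entry with the largest count.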


  • How to change scope data in controller?

    - by Derooie
    I am a real newbie at Angular. I created something now which will let me retrieve and add items via Angular and get/put them in MongoDB. I use Express and Mongoose in the app.

    The question is: how can I modify the data before it reaches the DOM in the controller? In this example I have created a way to retrieve data, and I get it exactly as it is stored in MongoDB. What I would like is for the field where I store 1 or 0 in the database to be shown as text. So if Mongo has the value 1, I get "the value in mongo is 1", and when the field has the value 0, I get "the value is zero". (Just as an example; I'd like other texts, but it illustrates what I want.)

    I post my controller, HTML and current output. Any help would be appreciated.

    Controller:

        function getGuests($scope, $http) {
            $scope.formData = {};
            $http.get('/api/guests')
                .success(function(data) {
                    $scope.x = data;
                })
                .error(function(data) {
                    console.log('Error: ' + data);
                });
        }

    HTML:

        <div ng-controller="getGuests">
            <div ng-repeat="guest in x">
                {{ guest.voornaam }} {{ guest.aanwezig }}
            </div>
        </div>

    The current scope output, what I see in HTML. I'd like to change only the value of "aanwezig", in case the value of aanwezig is 0 or 1:

        firstname1 1
        firstname2 0

    Something else, but it would be great to learn: how can I do a specific MongoDB query at the push of a button and get that result?


  • How to get MySQL database contents to appear on index.php

    - by Teddy Truong
    Hi, I have a submission form on my website (index.php) and I have the data (user submissions) being stored in a MySQL database. Right now, the user submits a post and the page then directs them to update.php, which shows what they inputted. However, I want all of the data in the MySQL database to be shown on index.php. It's a lot like a comment system: the user submits a post... and sees their post above the other submitted posts, all on the same page. I think I'm missing AJAX...?

    Here is the code for index.php:

        <div align="center">
            <p>&nbsp;</p>
            <h2 align="center" class="Title"><em><strong>REDACTED</strong></em></h2>
            <form id="form1" name="form1" method="post" action="update.php">
                <hr />
                <label><br />
                    <form action="update.php" method="post">
                        REDACTED: <input type="text" name="text" />
                        <input type="submit" />
                    </form>
                </label>
            </form>
        </div>

    On update.php I have this:

        <?php
        $text = $_POST['text'];
        $myString = "REDACTED";
        mysql_connect ("db----.net", "-----3", "------------") or die ('Error: ' . mysql_error());
        mysql_select_db ("-----------");
        $query = "INSERT INTO TextArea (ID, text) VALUES ('NULL', '".$text."')";
        mysql_query($query) or die ('Error updating database');
        echo " $myString "," $text ";
        ?>

    Thanks a lot!


  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:

        - .NET 3.5 WinForms client app accessing SQL Server 2008
        - Some queries returning a relatively big amount of data are used quite often by a form
        - Users are using local SQL Express and restarting their machines at least daily
        - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and it must be loaded from disk first.

    My question: would it be possible to force the loading of the required data into the SQL Server cache in advance?

    Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly. I however don't want to load the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables, so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?


  • JSF: how to temporarily disable validators to save a draft

    - by Swiety
    I have a pretty complex form with lots of inputs and validators. It takes the user a pretty long time (even over an hour) to complete it, so they would like to be able to save the draft data, even if it violates rules like mandatory fields not being filled in. I believe this problem is common to many web applications, but I can't find any well-recognised pattern for how it should be implemented. Can you please advise how to achieve that? For now I can see the following options:

        1. Use immediate=true on the "Save draft" button. This doesn't work, as the UI data would not be stored on the bean, so I wouldn't be able to access it. Technically I could find the data in the UI component tree, but traversing that doesn't seem to be a good idea.

        2. Remove all the field validation from the page and validate the data programmatically in the action listener defined for the form. Again, not a good idea: the form is really complex, there are plenty of fields, so validation implemented this way would be very messy.

        3. Implement my own validators, controlled by some request attribute, which would be set for standard form submission (with full validation expected) and unset for "save as draft" submission (when validation should be skipped). Again, not a good solution: I would need to provide my own wrappers for all the validators I am using.

    As you can see, none of these is really reasonable. Is there really no simple solution to the problem?


  • Generic callbacks

    - by bobobobo
    So, I'm trying to learn template metaprogramming better and I figure this is a good exercise for it. I'm trying to write code that can call back a function with any number of arguments I like passed to it.

        // First function to call
        int add( int x, int y ) ;
        // Second function to call
        double square( double x ) ;
        // Third func to call
        void go() ;

    The callback creation code should look like:

        // Write a callback object that
        // will be executed after 42ms for "add"
        Callback<int, int, int> c1 ;
        c1.func = add ;
        c1.args.push_back( 2 );  // these are the 2 args
        c1.args.push_back( 5 );  // to pass to the "add" function
                                 // when it is called

        Callback<double, double> c2 ;
        c2.func = square ;
        c2.args.push_back( 52.2 ) ;

    What I'm thinking is, using template metaprogramming, I want to be able to declare callbacks by writing a struct like this (please keep in mind this is VERY PSEUDOcode):

        <TEMPLATING ACTION <<ANY NUMBER OF TYPES GO HERE>> >
        struct Callback
        {
            double execTime ;  // when to execute
            TYPE1 (*func)( TYPE2 a, TYPE3 b ) ;
            void* argList ;    // a stored list of arguments
                               // to plug in when it is time to call __func__
        } ;

    So when it is called with

        Callback<int, int, int> c1 ;

    you would automatically get constructed for you, by < HARDCORE TEMPLATING ACTION >, a struct like:

        struct Callback
        {
            double execTime ;  // when to execute
            int (*func)( int a, int b ) ;
            void* argList ;    // this would still be void*,
                               // but I somehow need to remember
                               // the types of the args..
        } ;

    Any pointers in the right direction to get started on writing this?
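
    For what it's worth, the underlying "bind the arguments now, invoke later" idea is easy to express in a language with closures. A minimal Java sketch of the same pattern (illustrative only; the question itself is about doing this with C++ templates):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Supplier;

        public class Callbacks {
            // The arguments are captured by the lambda at creation time,
            // so the callback itself only needs a no-argument supplier.
            static class Callback<T> {
                final double execTime;   // when to execute (ms)
                final Supplier<T> func;  // the bound call
                Callback(double execTime, Supplier<T> func) {
                    this.execTime = execTime;
                    this.func = func;
                }
            }

            static int add(int x, int y) { return x + y; }
            static double square(double x) { return x * x; }

            public static void main(String[] args) {
                List<Callback<?>> pending = new ArrayList<>();
                pending.add(new Callback<>(42.0, () -> add(2, 5)));    // "add" bound to (2, 5)
                pending.add(new Callback<>(10.0, () -> square(52.2))); // "square" bound to 52.2
                for (Callback<?> c : pending) {
                    System.out.println("at " + c.execTime + "ms -> " + c.func.get());
                }
            }
        }

    The closure plays the role of the void* argList plus the remembered argument types; in C++ the equivalent single-type erasure is what std::function provides.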


  • How do you parse the XDG/Gnome/KDE menu/desktop item structure in C++?

    - by Joe Soul-bringer
    I would like to parse the menu structure for Gnome Panels (the standard Gnome Desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine. I would like to do this using fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that this menu structure is stored in XML files such as:

        /home/user/.config/menus/applications.menu

    I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec but all they offer is the standard and some shell files to insert item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification. I want to find the existing menu structure).

    I've looked here: http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en for more notes on these structures. None of this gives me a good idea of how to determine the menu structure using a C/C++ program. The actual Gnome menu structures seem to be horrifically hairy things -- they don't seem to show the menu structure but to give an XML-coded description of all the changes the menus have gone through since installation. I assume Gnome Panels parses these files, so there's a function buried somewhere to do this, but I've yet to find where that function is after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to exist elsewhere or is buried well. Thanks in advance.


  • Generic property: specify the type at runtime

    - by Lirik
    I was reading a question on making a generic property, but I'm a little confused by the last example from the first answer (I've included the relevant code below):

        You have to know the type at compile time. If you don't know the type at
        compile time then you must be storing it in an object, in which case you
        can add the following property to the Foo class:

            public object ConvertedValue {
                get {
                    return Convert.ChangeType(Value, Type);
                }
            }

    That seems strange: it's converting the value to the specified type, but it's returning it as an object when the value was already stored as an object. Doesn't the returned object still require unboxing? If it does, then why bother with the conversion of the type?

    I'm also trying to make a generic property whose type will be determined at run time:

        public class Foo {
            object Value { get; set; }
            Type ValType { get; set; }

            Foo(object value, Type type) {
                Value = value;
                ValType = type;
            }

            // I need a property that is actually
            // returned as the specified value type...
            public object ConvertedValue {
                get {
                    return Convert.ChangeType(Value, ValType);
                }
            }
        }

    Is it possible to make a generic property? Does the returned property value still require unboxing after it's accessed?
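
    For comparison, the same tension exists in Java: a type known only at run time travels as a token, and without a compile-time type parameter the caller still receives Object. A minimal Java sketch with hypothetical names:

        public class RuntimeTyped {
            private final Object value;
            private final Class<?> type;

            public RuntimeTyped(Object value, Class<?> type) {
                this.value = value;
                this.type = type;
            }

            // Checked at run time, but the static return type is still Object:
            // without a compile-time type parameter the caller must cast anyway.
            public Object convertedValue() {
                return type.cast(value);
            }

            // A generic method lets the caller supply the token and get a typed result.
            public <T> T valueAs(Class<T> target) {
                return target.cast(value);
            }
        }

    The second method shows the usual workaround: the caller, who does know the type at compile time, supplies it, e.g. String s = x.valueAs(String.class).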


  • jQuery - Fade in new Background Image?

    - by JasonS
    Hi, I think I may be trying something that isn't possible. I was recently tasked to create an HTML version of a Flash site. On the Flash site, when you click on the body the image changes. I have done this with jQuery. It's brilliant! However... it isn't "flash" enough. The powers that be want a fade effect between images. This is where I have come completely undone.

    This is how my script works at the moment. Images are stored in a div called photos. This is hidden.

        <div id="photos">
            <img src="image.jpg" alt="Some caption" style="#page-bg-color" />
        </div>

    When the page is loaded, jQuery checks to see if the first image is loaded. If it isn't, a loading symbol spins. When the image is loaded, it becomes the body background:

        $("body").css('background', 'url(' + photos[currentImage]["url"] + ') no-repeat 50% 50% fixed ' + photos[currentImage]["background"]);

    I have tried the following. I have tried animate; however, this doesn't work. I have tried this plugin: http://plugins.jquery.com/project/bgImageTransition This doesn't work either. It appears to clone the tag and do something. I assume it isn't working because you cannot clone the body tag, or else it is really old and no longer compatible with the current version of jQuery.

    I fear that I have coded my way down a dead end. Does anybody know how I could make this work?


  • Where does a process' memory space start and where does it end?

    - by nhaa123
    Hi, I'm trying to dump memory from my application where the variables lie. Here's the function:

        void MyDump(const void *m, unsigned int n)
        {
            const unsigned char *p = reinterpret_cast<const unsigned char *>(m);
            char buffer[16];
            unsigned int mod = 0;

            for (unsigned int i = 0; i < n; ++i, ++mod) {
                if (mod % 16 == 0) {
                    mod = 0;
                    std::cout << " | ";
                    for (unsigned short j = 0; j < 16; ++j) {
                        switch (buffer[j]) {
                            case 0xa:
                            case 0xb:
                            case 0xd:
                            case 0xe:
                            case 0xf:
                                std::cout << " ";
                                break;
                            default:
                                std::cout << buffer[j];
                        }
                    }
                    std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | ";
                }
                buffer[i % 16] = p[i];
                std::cout << std::setw(2) << std::hex << static_cast<unsigned int>(p[i]) << " ";
                if (i % 4 == 0 && i != 1)
                    std::cout << " ";
            }
        }

    Now, how can I know at which address my process' memory space starts, where all the variables are stored? And how do I know how long the area is? For instance:

        MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */);

    Best regards, nhaa123


  • Choose an XML node in SQL Server based on the max value of a child element

    - by Jay
    I am trying to select, from a SQL Server 2005 XML datatype, some values based on the max date that is located in a child node. I have multiple rows, with XML similar to the following stored in a field in SQL Server:

        <user>
            <name>Joe</name>
            <token>
                <id>ABC123</id>
                <endDate>2013-06-16 18:48:50.111</endDate>
            </token>
            <token>
                <id>XYZ456</id>
                <endDate>2014-01-01 18:48:50.111</endDate>
            </token>
        </user>

    I want to perform a select from this XML column that determines the max date within the token elements, and would return a row similar to the result below for each record:

        Joe    XYZ456    2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that.

    Here is an example of the one that works when I only have a single token:

        SELECT XMLCOL.query('user/name').value('.','NVARCHAR(20)') as name,
               XMLCOL.query('user/token/id').value('.','NVARCHAR(20)') as id,
               XMLCOL.query('user/token/endDate').value('xs:datetime(.)','DATETIME') as endDate
        FROM MYTABLE


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: we have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of the client, which is a Windows Forms based .NET app. Typically the service receives 50-100 incoming calls/minute to add or query jobs that are stored in a table on SQL Server.

    The same service is also responsible for processing these jobs in real time. It queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked.

    Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: this whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!
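
    As a general pattern (not a WCF-specific answer), one common way to tame per-event callbacks is to coalesce state changes and push them to clients in periodic batches instead of one message per change. A minimal Java sketch of the idea, with hypothetical names:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class BatchedNotifier {
            private final BlockingQueue<String> events = new LinkedBlockingQueue<>();
            private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();

            public void start() {
                // Instead of pushing every state change to every client immediately,
                // drain whatever accumulated and send one batch per tick.
                timer.scheduleAtFixedRate(() -> {
                    List<String> batch = new ArrayList<>();
                    events.drainTo(batch);
                    if (!batch.isEmpty()) {
                        publish(batch);   // one round-trip per client per tick
                    }
                }, 0, 500, TimeUnit.MILLISECONDS);
            }

            public void onJobStateChanged(String jobAndState) {
                events.offer(jobAndState);   // producers never block
            }

            private void publish(List<String> batch) {
                System.out.println("sending " + batch.size() + " updates in one message");
            }
        }

    With 200 state changes per second, a 500ms tick turns roughly 200 callbacks into 2 batched messages per client; the trade-off is up to half a second of notification latency.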


  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query on the table. For comparison, I have another table, which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime. A statement like the following (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I put my database on; at the moment it sits on a different spindle (not C:) and it takes 13 seconds.

    Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks!


  • Reverse search in Hibernate Search

    - by Javi
    Hello, I'm using Hibernate Search (which uses Lucene) for searching some Data objects I have indexed in a directory. It works fine, but I need to do a reverse search. By reverse search I mean that I have a list of queries stored in my database, and I need to check which of these queries match a Data object each time one is created. I need it to alert the user when a Data object matches a query he has created. So I need to index this single Data object, which has just been created, and see which queries in my list have this object as a result.

    I've seen Lucene's MemoryIndex class for creating an index in memory, so I can do something like the following example for every query in the list (though iterating over a Java list of queries would not be very efficient):

        // Iterating over my List<Query>
        MemoryIndex index = new MemoryIndex();
        // Add all fields
        index.addField("myField", "myFieldData", analyzer);
        ...
        QueryParser parser = new QueryParser("myField", analyzer);
        float score = index.search(query);
        if (score > 0.0f) {
            System.out.println("it's a match");
        } else {
            System.out.println("no match found");
        }

    The problem here is that this Data class has several Hibernate Search annotations (@Field, @IndexedEmbedded, ...) which indicate how its fields should be indexed, so when I invoke the index() method on the FullTextEntityManager instance it uses this information to index the object in the directory. Is there a similar way to index it in memory using this information? Is there a more efficient way of doing this reverse search? Thanks


  • Automatically hyperlink URLs and emails using C#, whilst leaving bespoke tags in place

    - by marcusstarnes
    I have a site that enables users to post messages to a forum. At present, if a user types a web address or email address and posts it, it's treated the same as any other piece of text. There are tools that enable the user to supply hyperlinked web and email addresses (via some bespoke tags/markup); these are sometimes used, but not always. In addition, a bespoke 'Image' tag can also be used to reference images that are hosted on the web.

    My objective is to cater both for those that use these existing tools to generate hyperlinked addresses, and for those that simply type a web or email address in, and to then automatically convert the latter to a hyperlinked address for them (as soon as they submit their post).

    I've found one or two regular expressions that convert a plain-string web or email address. However, I obviously don't want to perform any manipulation on addresses that are already handled via the site's bespoke tagging, and that's where I'm stuck: how to EXCLUDE any web or email addresses that are already catered for via the bespoke tagging. I want to leave them as is.

    Here are some examples of bespoke tagging for the variations that I need to be left alone:

        [URL=www.msn.com]www.msn.com[/URL]
        [URL=http://www.msn.com]http://www.msn.com[/URL]
        [[email protected]][email protected][/EMAIL]
        [IMG]www.msn.com/images/test.jpg[/IMG]
        [IMG]http://www.msn.com/images/test.jpg[/IMG]

    The following examples would, however, ideally need to be automatically converted into web and email links respectively:

        www.msn.com
        http://www.msn.com
        [email protected]

    Ideally, the 'converted' links would just have the appropriate bespoke tags applied to them as per the initial examples earlier in this post, so rather than <a href="..." etc. they'd become [URL=http://www... etc. Unfortunately, we have a LOT of historic data stored with this bespoke tagging throughout, so for now we'd like to retain that rather than implementing an entirely new way of storing our users' posts. Any help would be much appreciated. Thanks.
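
    A minimal sketch of one possible approach, in Java (the question is about C#, but the regex idea carries over): split the text on the already-tagged segments and only linkify the plain-text parts between them. The patterns here are deliberately naive assumptions, not production-ready address grammars:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class Linkify {
            // Matches an already-tagged segment so it can be passed through untouched.
            private static final Pattern TAGGED = Pattern.compile(
                "\\[(URL|EMAIL|IMG)=?[^\\]]*\\].*?\\[/\\1\\]", Pattern.CASE_INSENSITIVE);
            // Deliberately simple matchers for bare addresses (assumptions, not RFC-complete).
            private static final Pattern BARE_URL =
                Pattern.compile("(?:https?://|www\\.)\\S+");
            private static final Pattern BARE_EMAIL =
                Pattern.compile("[\\w.+-]+@[\\w.-]+\\.[A-Za-z]{2,}");

            public static String linkify(String input) {
                StringBuilder out = new StringBuilder();
                Matcher m = TAGGED.matcher(input);
                int last = 0;
                while (m.find()) {
                    out.append(wrapBare(input.substring(last, m.start()))); // plain text: linkify
                    out.append(m.group());                                  // tagged: pass through
                    last = m.end();
                }
                out.append(wrapBare(input.substring(last)));
                return out.toString();
            }

            private static String wrapBare(String text) {
                text = BARE_EMAIL.matcher(text).replaceAll("[EMAIL=$0]$0[/EMAIL]");
                return BARE_URL.matcher(text).replaceAll("[URL=$0]$0[/URL]");
            }
        }

    The key trick is the backreference \1 in TAGGED, which keeps an opening tag paired with its own closing tag; everything outside those matches is safe to rewrite into the same bespoke tag format.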


  • mod_rewrite if file exists

    - by Mathieu Parent
    Hi everyone, I already have two rewrite rules that work correctly for now, but some more code has to be added for them to work perfectly. I have a website hosted at mydomain.com, and all subdom.mydomain.com requests are rewritten to mydomain.com/subs/subdom. My CMS has to handle the request if the file being reached does not exist; the rewrite is done like so:

        RewriteCond $1 !^subs/
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ subs/%1/index.php?page=$1 [L]

    My CMS handles the next part of the parsing as usual. The problem is when a file really exists: I need to link to it without passing through my CMS. I managed to do that like this:

        RewriteCond $1 !^subs/
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.*)$ subs/%1/$1 [L]

    So far it seems to work like a charm. Now I am being picky: I need to have default files that are stored in subs/default/. If the file exists in the subdomain folder, we should grab that one, but if not, we need to get the file from the default subdomain. And if the file does not exist anywhere, we should use the 404 page from the current subdomain, unless there is none. I hope that describes it well enough. Thank you for your time!


  • Running a MySQL query using Node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node JavaScript app that uses the mysql npm package (Felix's). I have a stored procedure in my DB which I call when the user selects an option to create, in a sense, his own instance of the program. The user chooses for how long he wants the data to be initialized, which is supposed to be between 1 and 2 years. So if he chooses 1 year, this query will insert around 20,000 rows into one table. If I run this query on a local DB it takes around 30 seconds (I suppose that's reasonable because it's a big query which should be run only once in 1 or 2 years, so it's OK).

    For some reason my Node script freezes, as if it can't handle any more calls from other users. The even worse problem is that after about 2 minutes my client UI gets an error from the server. At this point not all the data that was supposed to enter the DB has been entered. After waiting another minute or so, all the data finally gets to the DB, and only then will it accept new requests.

    This is my connection:

        this.connection = mysql.createConnection({
            host     : '********rds.amazonaws.com',
            user     : 'admin',
            password : '******',
            database : '*****'
        });

    and this is my query function:

        this.createCourts = function (req, res, next) {
            connection.query('CALL filldates("' + req.body['startDate'] + '","' +
                req.body['endDate'] + '","' + req.body['numOfCourts'] + '","' +
                req.body['duration'] + '","' + req.body['sundayOpen'] + '","' +
                req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","' +
                req.body['wednesdayOpen'] + '","' + req.body['thursdayOpen'] + '","' +
                req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","' +
                req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","' +
                req.body['tuesdayClose'] + '","' + req.body['wednesdayClose'] + '","' +
                req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","' +
                req.body['saturdayClose'] + '");',
                function(err) {
                    if (err) {
                        console.log(err);
                    } else return res.send(200);
                });
        };

    What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my Node script? Thanks.


  • Comparing salted and hashed passwords during login doesn't seem to work right

    - by Pandiya Chendur
    I stored the salt and hash values of the password during user registration. But during login I then salt and hash the password given by the user, and what happens is that a new salt and a new hash are generated...

        string password = collection["Password"];
        reg.PasswordSalt = CreateSalt(6);
        reg.PasswordHash = CreatePasswordHash(password, reg.PasswordSalt);

    These statements are in both registration and login. The salt and hash during registration were eVSJE84W and 18DE22FED8C378DB7716B0E4B6C0BA54167315A2. During login they were 4YDIeARH and 12E3C1F4F4CFE04EA973D7C65A09A78E2D80AAC7. Any suggestion?

        public static string CreateSalt(int size)
        {
            // Generate a cryptographic random number.
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            byte[] buff = new byte[size];
            rng.GetBytes(buff);
            // Return a Base64 string representation of the random number.
            return Convert.ToBase64String(buff);
        }

        public static string CreatePasswordHash(string pwd, string salt)
        {
            string saltAndPwd = String.Concat(pwd, salt);
            string hashedPwd = FormsAuthentication.HashPasswordForStoringInConfigFile(
                saltAndPwd, "sha1");
            return hashedPwd;
        }
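
    For reference, the usual flow is to reuse the salt stored at registration when validating a login, rather than generating a fresh one with CreateSalt. A minimal sketch of that flow (in Java, with hypothetical names; the same logic applies to the C# code above):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public class PasswordCheck {
            static byte[] newSalt() {
                byte[] salt = new byte[6];
                new SecureRandom().nextBytes(salt);   // registration only: generate once, then store
                return salt;
            }

            static byte[] hash(String password, byte[] salt) throws Exception {
                MessageDigest sha = MessageDigest.getInstance("SHA-1");
                sha.update(password.getBytes("UTF-8"));
                sha.update(salt);                      // the SAME salt must be used on both sides
                return sha.digest();
            }

            // Login: fetch the salt and hash saved at registration, re-hash the attempt,
            // and compare -- never create a fresh salt here.
            static boolean verify(String attempt, byte[] storedSalt, byte[] storedHash)
                    throws Exception {
                return MessageDigest.isEqual(storedHash, hash(attempt, storedSalt));
            }
        }

    A fresh random salt at login guarantees a different hash every time, which is exactly the symptom described above.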


  • Oracle DBMS_PROFILER only shows Anonymous in the results tables

    - by Greg Reynolds
    I am new to DBMS_PROFILER. All the examples I have seen use a simple top-level procedure to demonstrate the use of the profiler, and from there get all the line numbers etc. I deploy all code in packages, and I am having great difficulty getting my profile session to populate plsql_profiler_units with useful data. Most of my runs look like this:

        RUNID  RUN_COMMENT    UNIT_OWNER   UNIT_NAME    SECS  PERCENT
        -----  -------------  -----------  -----------  ----  -------
            5  Test Profiler  <anonymous>  <anonymous>   .00      2.1
            5  Test Profiler  <anonymous>  <anonymous>   .00      2.1
            5  Test Profiler  <anonymous>  <anonymous>   .00      2.1

    I have just embedded the calls to dbms_profiler.start_profiler, flush_data and stop_profiler as per all the examples. The main difference is that my code is in a package, and calls into other packages. Do you have to profile every single stored procedure in your call stack? If so, that makes this tool a little useless! I have checked http://www.dba-oracle.com/t_plsql_dbms_profiler.htm for hints, among other similar sites.


  • Reading in bytes produced by PHP script in Java to create a bitmap

    - by Kareem
    I'm having trouble getting a compressed JPEG image (stored as a blob in my database). Here is the snippet of code I use to output the image that I have in my database:

        if ($row = mysql_fetch_array($sql)) {
            $size = $row['image_size'];
            $image = $row['image'];
            if ($image == null) {
                echo "no image!";
            } else {
                header('Content-Type: content/data');
                header("Content-length: $size");
                echo $image;
            }
        }

    Here is the code that I use to read it in from the server:

        URL sizeUrl = new URL(MYURL);
        URLConnection sizeConn = sizeUrl.openConnection();

        // Get the response
        BufferedReader sizeRd = new BufferedReader(new InputStreamReader(sizeConn.getInputStream()));
        String line = "";
        while (line.equals("")) {
            line = sizeRd.readLine();
        }
        int image_size = Integer.parseInt(line);
        if (image_size == 0) {
            return null;
        }

        URL imageUrl = new URL(MYIMAGEURL);
        URLConnection imageConn = imageUrl.openConnection();

        // Get the response
        InputStream imageRd = imageConn.getInputStream();
        byte[] bytedata = new byte[image_size];
        int read = imageRd.read(bytedata, 0, image_size);
        Log.e("IMAGEDOWNLOADER", "read " + read + " amount of bytes");
        Log.e("IMAGEDOWNLOADER", "byte data has length " + bytedata.length);
        Bitmap theImage = BitmapFactory.decodeByteArray(bytedata, 0, image_size);
        if (theImage == null) {
            Log.e("IMAGEDOWNLOADER", "the bitmap is null");
        }
        return theImage;

    My logging shows that everything has the right length, yet theImage is always null. I'm thinking it has to do with my content type. Or maybe the way I'm uploading?
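
    One thing worth checking (an assumption, not a confirmed diagnosis): InputStream.read may return fewer bytes than requested, so a single read call can leave the buffer only partially filled and make the decode fail. A minimal sketch of a read-until-full loop:

        import java.io.IOException;
        import java.io.InputStream;

        public class FullReader {
            // Keep reading until the buffer is full or the stream ends;
            // a single InputStream.read() call is allowed to return early.
            static int readFully(InputStream in, byte[] buf) throws IOException {
                int off = 0;
                while (off < buf.length) {
                    int n = in.read(buf, off, buf.length - off);
                    if (n < 0) break;   // end of stream
                    off += n;
                }
                return off;             // number of bytes actually read
            }
        }

    Note also that bytedata.length will always equal image_size regardless of how many bytes arrived, so that log line cannot confirm the download completed; logging the return value of readFully would.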


  • Question about WeakHashMap

    - by michael
    Hi, the Javadoc for WeakHashMap (http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html) says:

        Each key object in a WeakHashMap is stored indirectly as the referent of a
        weak reference. Therefore a key will automatically be removed only after
        the weak references to it, both inside and outside of the map, have been
        cleared by the garbage collector.

    and then:

        Note that a value object may refer indirectly to its key via the
        WeakHashMap itself; that is, a value object may strongly refer to some
        other key object whose associated value object, in turn, strongly refers
        to the key of the first value object.

    But shouldn't both the key and the value be held by weak references in a WeakHashMap? That is, if memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases). And if the GC frees the value object, couldn't the key object be freed as well?

    Basically, I am looking for a HashMap which will reduce memory usage when memory is low (with the GC collecting the value and key objects if necessary). Is this possible in Java? Thank you.
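
    For illustration, a minimal sketch of the documented behaviour -- keys are weakly referenced, values are strongly held, and a value that refers back to its key pins the entry (GC timing is only a hint, so the printed output is not guaranteed):

        import java.util.Map;
        import java.util.WeakHashMap;

        public class WeakDemo {
            public static void main(String[] args) throws InterruptedException {
                Map<Object, Object> map = new WeakHashMap<>();

                Object key = new Object();
                map.put(key, "value");   // the value is strongly held inside the map

                key = null;              // drop the only outside strong reference to the key
                System.gc();             // request a collection (only a hint to the JVM)
                Thread.sleep(100);

                // The entry (key AND value together) is now eligible to vanish:
                System.out.println("entries left: " + map.size());

                // Pitfall from the Javadoc: a value that strongly refers to its key
                // keeps the key reachable, so this entry can never be collected.
                Object pinned = new Object();
                map.put(pinned, pinned);
            }
        }

    So the value is freed too, but only as a consequence of its key becoming unreachable, never the other way around; for memory-pressure-driven caching, SoftReference-based values are the usual direction to look.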


  • Can I version dotfiles within a project without merging their history into the main line?

    - by istrasci
    I'm sure this title is fairly obscure. I'm wondering if there is some way in git to tell it that you want a certain file to use different versions of itself when moving between branches, but to be .gitignored overall from the repository.

    Here's my scenario: I've got a Flash Builder project (for a Flex app) that I control with git. Flex apps in Flash Builder projects create three files: .actionScriptProperties, .flexProperties, and .project. These files contain lots of local file system references (source folders, output folders, etc.), so naturally we .gitignore them from our repo.

    Today, I wanted to use a new library in my project, so I made a separate git branch called lib, removed the old version of the library and put in the new one. Unfortunately, this Flex library information gets stored in one of those three dot files (not sure which offhand). So when I had to switch back to the first branch (master) earlier, I was getting compile errors because master was now linked to the new library (which basically negated why I made lib in the first place).

    So I'm wondering if there's any way for me to continue to .gitignore these files (so my other developers don't get them), but tell git that I want it to use some kind of local "branch version" so I can locally use different versions of the files for different branches.

