Search Results

Search found 11618 results on 465 pages for 'shared storage'.

  • jQuery logic firing twice from a Usercontrol when used in a jQueryUI modal dialog

    - by AaronS
    I have an ASP.NET user control that holds a bunch of HTML and jQuery logic shared across several pages. The user control loads some dropdown boxes from JSON calls and has no added code-behind logic. When I use this user control on a normal page it works perfectly fine, no issues at all. However, when I wrap the user control in a div and use a jQuery UI modal dialog, everything in the user control fires twice: not only the code in the initial $(document).ready(function() {}), but every function is also called twice. Debugging this in Visual Studio, I see that everything is first called from the external JS file, and then again from a "script block" file that is somehow generated on the fly. This script block file isn't generated on a page that doesn't wrap the user control in a modal. The same happens whether I use IIS Express or IIS 7. The question is: why is this script block file getting created, and why is all my jQuery logic firing twice?

    Edit: here is the div:

        <div id="divMyDiv" title="MyDiv">
            <uc1:MyUserControl runat="server" ID="MyUsercontrol" />
        </div>

    Here is the modal logic that uses it:

        $("#divMyDiv").dialog({
            autoOpen: false,
            height: 400,
            width: 400,
            modal: true,
            open: function (type, data) {
                $(this).parent().appendTo("form");
            }
        });

    Note: the problem still occurs even if I remove the "open:" function, but it does not occur if I remove the entire dialog block, so it is specific to this dialog call.
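
    One defensive sketch, not a diagnosis of the root cause: guard the user control's start-up logic with a flag so that even if the script ends up being evaluated twice, the handlers are only bound once. The divMyDiv selector comes from the question; the initMyUserControl name is hypothetical.

        // Hypothetical wrapper around the user control's start-up logic.
        function initMyUserControl() {
            var $host = $("#divMyDiv");
            // Bail out if an earlier evaluation already initialized this control.
            if ($host.data("ucInitialized")) {
                return;
            }
            $host.data("ucInitialized", true);

            // ... original dropdown loading and event binding go here ...
        }

        $(document).ready(initMyUserControl);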

  • AIX 6.1 unable to build apache module

    - by user3715581
    We are on AIX 6.1 with Apache 2.2.8 as packaged with IBM WebSphere. mod_dumpio should have been included in this version, but for some reason IBM did not include it, so we are trying to build the module ourselves (in quite a few ways), but none of the attempts have worked.

    1) Using apxs: it failed with "xlc_r" not found as the compile option for libtool. Based on an article online we changed the compiler to "gcc", and we had to remove -qmaxmem and -qHALT to make it run. Result: we see a .lo created, but LoadModule fails (unable to find .loader section).

    2) Using gcc: we ran the command

        gcc -fpic -DSHARED_MODULE -I -c mod_dumpio.c

    After running this command we can see a .o file created, and then we tried to execute

        ld -Bshareable -o mod_dumpio.so mod_dumpio.o

    but AIX complains about -Bshareable, so we tried

        gcc -shared -I -o mod_dumpio.so mod_dumpio.o

    where the error was "libgcc_s" not found. We then added -static-libgcc to that command, and it could not resolve the functions declared in the .h files (unknown symbols). According to IBM's AIX site, libgcc_s costs around $2k. We think our second approach may work, but we don't know how to instruct gcc to look for the .h files while building the .so from the .o. Any help is appreciated.
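
    For reference, a sketch of pointing gcc at the Apache headers with -I: the include directory shown is hypothetical and should be replaced with whichever directory of the WebSphere/IBM HTTP Server install actually contains httpd.h and http_config.h.

        # Hypothetical header location; adjust to where httpd.h etc. actually live.
        APACHE_INC=/opt/IBM/HTTPServer/include

        gcc -fpic -DSHARED_MODULE -I"$APACHE_INC" -c mod_dumpio.c
        gcc -shared -o mod_dumpio.so mod_dumpio.o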

  • Implementing IEnumerable on Non-Listed Items

    - by Stacey
    I have a class that contains a static number of objects. This class needs to be frequently 'compared' to other classes that will be simple List objects.

        public partial class Sheet
        {
            public Item X { get; set; }
            public Item Y { get; set; }
            public Item Z { get; set; }
        }

    The items are obviously not going to be "X", "Y", "Z"; those are just generic names for the example. The problem is that, due to the nature of what needs to be done, a List won't work, even though everything in here is going to be of type Item. It is like a checklist of very specific things that has to be tested against, both in code and at runtime. This all works fine and well; it isn't my issue. My issue is iterating it. For instance, I want to do the following:

        List<Item> UncheckedItems = // repository logic here

    UncheckedItems contains all available items, and CheckedItems is the Sheet class instance. CheckedItems will contain items that were moved from Unchecked to Checked; however, due to the nature of the storage system, items moved to Checked CANNOT be REMOVED from Unchecked. I simply want to iterate through Checked and remove anything from the list in Unchecked that is already in Checked. So naturally, with a normal list, that would go like this:

        foreach(Item item in Unchecked)
        {
            if( Checked.Contains(item) )
                Unchecked.Remove( item );
        }

    But since Sheet is not a List, I cannot do that. So I wanted to implement IEnumerable so that I could. Any suggestions? I've never implemented IEnumerable directly before and I'm pretty confused as to where to begin.
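
    A minimal sketch of one way to start, assuming the checklist entries are exactly the Sheet's Item properties: expose them through IEnumerable<Item> with an iterator method.

        using System.Collections;
        using System.Collections.Generic;

        public partial class Sheet : IEnumerable<Item>
        {
            public IEnumerator<Item> GetEnumerator()
            {
                // Yield each named slot; skip any that have not been filled in yet.
                if (X != null) yield return X;
                if (Y != null) yield return Y;
                if (Z != null) yield return Z;
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    With that in place (and a using System.Linq; directive), the filtering can be written as Unchecked.RemoveAll(item => Checked.Contains(item)), which also avoids removing from the list while a foreach is enumerating it.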

  • How to make an add-in load again and again for every instance of an application?

    - by Ashwin Upadhyay
    I have one question. I created a shared add-in in C#.NET, and the add-in is working fine. Now I want the add-in to be loaded again each time any Office application is opened. For example, when I open an MS Word document the add-in loads for it; if I then open another MS Word document without closing the previously opened one, the add-in should load again for the newly opened document. At the moment, when I open MS Word the first time the add-in loads, but when I open MS Word again the add-in is already loaded and does not load a second time.

    My requirement is that the add-in works in the background: its only job is to record the opening and closing times of each Word document, how much time was spent in that document, and the document's name. When I open one Word document the add-in loads for it, but if I then open a new Word document while the previously opened one is still open, the add-in does not load for that new document. If I close the previously opened document first, the add-in does load for the new document.
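
    One reading of this behaviour: a shared add-in is loaded once per running Word instance, and opening another document usually reuses that instance, so a second load never happens. A sketch of tracking every document from that single load instead, using Word's application-level events; it assumes the add-in kept the Application object it received in IDTExtensibility2.OnConnection, and the member names are hypothetical.

        using System;
        using System.Collections.Generic;
        using Word = Microsoft.Office.Interop.Word;

        public partial class Connect   // the shared add-in's IDTExtensibility2 class
        {
            private Word.Application wordApp;   // assigned from the application object in OnConnection
            private Dictionary<string, DateTime> openedAt = new Dictionary<string, DateTime>();

            private void HookWordEvents()
            {
                wordApp.DocumentOpen += new Word.ApplicationEvents4_DocumentOpenEventHandler(OnDocumentOpen);
                wordApp.DocumentBeforeClose += new Word.ApplicationEvents4_DocumentBeforeCloseEventHandler(OnDocumentBeforeClose);
            }

            private void OnDocumentOpen(Word.Document doc)
            {
                openedAt[doc.FullName] = DateTime.Now;          // record opening time per document
            }

            private void OnDocumentBeforeClose(Word.Document doc, ref bool cancel)
            {
                DateTime start;
                if (openedAt.TryGetValue(doc.FullName, out start))
                {
                    TimeSpan spent = DateTime.Now - start;      // time spent in this document
                    // persist doc.FullName, start and spent here
                    openedAt.Remove(doc.FullName);
                }
            }
        }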

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field.

    Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
    Memory, including on the application side (C++)?
    Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practice" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where the real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

  • NServiceBus & MSMQ: How To Change the Default Permissions on the Queue?

    - by Amy T
    My team is on our first attempt at using NServiceBus (v2.0), with MSMQ as the backing storage, and we're getting stuck on queue permissions. We're using it in a Web Forms application where the user account the website runs under is not an administrator on the machine. When NServiceBus creates the MSMQ queue, it gives the local Administrators group full control and gives the local Everyone and Anonymous groups permission to send messages. Later, as part of initializing the queue, NServiceBus tries to read all of its messages, and that's where we run into the permissions error: since the website isn't running as an administrator, it isn't allowed to read messages.

    How are other people dealing with this? Do your applications run as administrators? Do you create the MSMQ queue in your own code first, giving it the permissions you need, so that NServiceBus doesn't have to create it? Is there a bit of configuration we're missing? Or are we likely writing our code that uses NServiceBus incorrectly to be running into this?
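
    A sketch of the "create the queue yourself" option using System.Messaging, so the queue already exists with the right ACL before NServiceBus starts; the queue path and the account name are placeholders.

        using System.Messaging;

        static class QueueSetup
        {
            public static void EnsureQueue()
            {
                const string queuePath = @".\private$\myapp.input";       // placeholder path
                const string websiteAccount = @"MACHINE\WebSiteUser";     // placeholder account

                if (!MessageQueue.Exists(queuePath))
                {
                    MessageQueue.Create(queuePath, true);   // transactional, as NServiceBus expects
                }

                using (var queue = new MessageQueue(queuePath))
                {
                    // Let the non-admin website account read and send messages.
                    queue.SetPermissions(websiteAccount,
                        MessageQueueAccessRights.ReceiveMessage
                        | MessageQueueAccessRights.PeekMessage
                        | MessageQueueAccessRights.WriteMessage
                        | MessageQueueAccessRights.GetQueueProperties,
                        AccessControlEntryType.Allow);
                }
            }
        }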

  • XSL(like) declarative language as MVC view over strongtyped model?

    - by Martin Kool
    As a huge XSL fan, I am very happy to use XSL as the view in our proprietary MVC framework on ASP.NET. Objects in the model are serialized under the hood using .NET's XML serializer, and we use quite atomic XSL templates to declare how each object or property should transform. For example:

        <xsl:template match="/Article">
          <html>
            <body>
              <div class="article">
                <xsl:apply-templates />
              </div>
            </body>
          </html>
        </xsl:template>

        <xsl:template match="Article/Title">
          <h1>
            <xsl:apply-templates />
          </h1>
        </xsl:template>

        <xsl:template match="@*|text()">
          <xsl:copy />
        </xsl:template>

    This mechanism allows us to quickly override default matching templates, like having a template matching the last item in a list, or the selected one, etc. Also, XSL extension objects in .NET give us just the bit of extra grip that we need, and common shared templates can be split up and included.

    However, even though I can ignore the verbosity downside of XSL (because Visual Studio's schema intellisense + snippets really are slick, praise to the VS team), the downside of not having intellisense over the strongly typed objects in the model is really bugging me. I've seen ASP.NET MVC + user controls in action and am really starting to love it, but I wonder: is there a way of getting some sort of intellisense over the XML we're iterating over, or do you know of a language that offers the freedom and declarativeness of XSL but has the strong-typing/intellisense benefits of, say, WebForms/user controls/ASP.NET MVC views? (I probably know the answer: "no", and I'll find myself using Phil Haack's utterly cool MVC shizzle soon...)

  • Why can a public class not inherit from a less visible one?

    - by Dan Tao
    I apologize if this question has been asked before; I've searched SO somewhat and wasn't able to find it. I'm just curious what the rationale behind this design was/is. Obviously I understand that private/internal members of a base type cannot, nor should they, be exposed through a derived public type. But it seems to my naive thinking that the "hidden" parts could easily remain hidden while some base functionality is still shared and a new interface is exposed publicly. I'm thinking of something along these lines:

    Assembly X:

        internal class InternalClass
        {
            protected virtual void DoSomethingProtected()
            {
                // Let's say this method provides some useful functionality.
                // Its visibility is quite limited (only to derived types in
                // the same assembly), but at least it's there.
            }
        }

        public class PublicClass : InternalClass
        {
            public void DoSomethingPublic()
            {
                // Now let's say this method is useful enough that this type
                // should be public. What's keeping us from leveraging the
                // base functionality laid out in InternalClass's implementation,
                // without exposing anything that shouldn't be exposed?
            }
        }

    Assembly Y:

        public class OtherPublicClass : PublicClass
        {
            // It seems (again, to my naive mind) that this could work. This class
            // simply wouldn't be able to "see" any of the methods of InternalClass
            // from Assembly X directly. But it could still access the public and
            // protected members of PublicClass that weren't inherited from
            // InternalClass. Does this make sense? What am I missing?
        }

  • Ruby on Rails: how do I access a model's variables inside the model itself, as in this example?

    - by banditKing
    I have a model like so:

        # == Schema Information
        #
        # Table name: s3_files
        #
        #  id                       :integer         not null, primary key
        #  owner                    :string(255)
        #  notes                    :text
        #  created_at               :datetime        not null
        #  updated_at               :datetime        not null
        #  last_accessed_by_user    :string(255)
        #  last_accessed_time_stamp :datetime
        #  upload_file_name         :string(255)
        #  upload_content_type      :string(255)
        #  upload_file_size         :integer
        #  upload_updated_at        :datetime
        #

        class S3File < ActiveRecord::Base
          # Paperclip methods
          attr_accessible :upload
          attr_accessor :owner

          Paperclip.interpolates :prefix do |attachment, style|
            # I WOULD LIKE TO ACCESS THE owner VARIABLE HERE - HOW DO I DO THAT?
          end

          has_attached_file(
            :upload,
            :path           => ":prefix/:basename.:extension",
            :storage        => :s3,
            :s3_credentials => { :access_key_id => "ZXXX", :secret_access_key => "XXX" },
            :bucket         => "XXX"
          )

          # Used to connect to users through the join table
          has_many :user_resource_relationships
          has_many :users, :through => :user_resource_relationships
        end

    I'm setting this variable in the controller like so:

        # POST /s3_files
        # POST /s3_files.json
        def create
          @s3_file = S3File.new(params[:s3_file])
          @s3_file.owner = current_user.email

          respond_to do |format|
            if @s3_file.save
              format.html { redirect_to @s3_file, notice: 'S3 file was successfully created.' }
              format.json { render json: @s3_file, status: :created, location: @s3_file }
            else
              format.html { render action: "new" }
              format.json { render json: @s3_file.errors, status: :unprocessable_entity }
            end
          end
        end

    Thanks, any help would be appreciated.
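
    A sketch of how this is commonly done with Paperclip: the interpolation block receives the attachment, and attachment.instance returns the model the attachment belongs to, so the owner value can be read from it (assuming owner is set on the instance by the time the path is interpolated):

        Paperclip.interpolates :prefix do |attachment, style|
          # attachment.instance is the S3File record that owns this attachment.
          attachment.instance.owner
        end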

  • Devise password reset issue (new_user?)

    - by rabid_zombie
    When a user's email is entered into the forgot-password form and submitted, I receive an error saying login can't be blank. I looked around devise.en.yml for this error message but can't seem to find it anywhere. Here is my views/devise/passwords/new.html.haml:

        %div.registration_page
          %h2 Forgot your password?
          = form_for(resource, :as => resource_name, :url => user_password_path, :html => { :method => :post, :id => 'forgot_pw_form', :class => 'forgot_pw' }) do |f|
            %div
              = f.email_field :email, :placeholder => 'Email', :autofocus => true, :autocomplete => 'off'
            %div.email_error.error
            %input.btn.btn-success{:type => 'submit', :value => 'Send Instructions'}
            = render "devise/shared/links"

    The form is posting to users/password like it should, but I noticed that my forgot-password form attaches class='new_user'. Here is what my form renders as:

        <form accept-charset='UTF-8' action='/users/password' class='new_user' id='forgot_pw_form' method='post' novalidate='novalidate'></form>

    My routes for Devise (I have custom sessions and registrations controllers):

        devise_for :users, :controllers => {:sessions => 'sessions', :registrations => 'registrations'}

    How can I set up Devise's forgot-password functionality? Why am I receiving this error message, and why is that class being added there?

    I've tried: adding my own passwords controller and new routes for it (same error); adding my own class and id to the form (this successfully changes the id and class of the form, but it reverts back to the class and id of new_user). Thanks.

  • Recreating http request with cURL incl. files

    - by Toby
    I consistently get the error 'failed creating formpost data' from the code below. The same thing works perfectly on my local testing server, but on my shared host it throws the error. The sample part is just to simulate building the array with both files and non-file data. Essentially all I'm trying to do here is forward the same HTTP request to another server, but I'm running into a lot of trouble.

        $count = count($_FILES['photographs']['tmp_name']);
        $file_posts = array('samplesample' => 'ladeda');

        for ($i = 0; $i < $count; $i++) {
            if (!empty($_FILES['photographs']['name'][$i])) {
                $fn = genRandomString();
                $file_posts[$fn] = "@" . $_FILES['photographs']['tmp_name'][$i];
            }
        }

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://myurl/wp-content/plugins/autol/rec.php");
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_POST, TRUE);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $file_posts);
        curl_exec($ch);
        print curl_error($ch);
        curl_close($ch);
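
    A small diagnostic sketch, on the assumption that the error comes from one of the @-prefixed file entries (that message is commonly seen when a file referenced with @ cannot be read, for example because of open_basedir or upload restrictions on the shared host):

        foreach ($file_posts as $key => $value) {
            // Only inspect entries that reference a file upload.
            if (is_string($value) && strpos($value, '@') === 0) {
                $path = substr($value, 1);
                if (!is_readable($path)) {
                    error_log("curl formpost: cannot read file for field '$key': $path");
                }
            }
        }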

  • Which design pattern fits here - does strategy make sense?

    - by user554833
    (Bump - one desperate try to get someone's attention.)

    I have a simple database table that stores the list of users who have subscribed to folders, either by email or to show up on the site (only in the web UI). In the storage table this is controlled by a number (1 = show on site, 2 = by email). When showing the UI I need to show a checkbox next to each folder the user has subscribed to (both email and on-site). There is a separate table which stores a set of default subscriptions that apply to each user if the user has not expressed a subscription; this is basically a folder ID and a virtual group name. However, email subscriptions do not count for applying these default groups. So: if there is no "on site" subscription, apply the default group. That's the rule.

    How about a strategy pattern here? (Pseudocode:)

        Interface ISubscription
            public ArrayList GetSubscriptionData(pass query object)

        Public class SubscriptionWithDefaultGroup
            Implement ArrayList GetSubscriptionData(pass query object)

        Public class SubscriptionWithoutDefaultGroup
            Implement ArrayList GetSubscriptionData(pass query object)

        Public class SubscriptionOnlyDefaultGroup
            Implement ArrayList GetSubscriptionData(pass query object)

    Does this even make sense? I would be more than glad to receive any criticism / help / notes. I am learning. Cheers.
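
    A minimal C# sketch of how the strategies could be wired up; all of the type names and the selection rule are illustrative, derived from the description above (fall back to the default group only when the user has no on-site subscription):

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative types; names are hypothetical, not from the original code.
        public class Subscription
        {
            public int FolderId;
            public int Kind;   // 1 = show on site, 2 = by email
        }

        public interface ISubscriptionStrategy
        {
            IList<Subscription> GetSubscriptionData(IList<Subscription> own, IList<Subscription> defaults);
        }

        // Used when the user has no on-site subscriptions: fall back to the default group.
        public class OnlyDefaultGroupStrategy : ISubscriptionStrategy
        {
            public IList<Subscription> GetSubscriptionData(IList<Subscription> own, IList<Subscription> defaults)
            {
                return defaults.ToList();
            }
        }

        // Used when the user has expressed on-site subscriptions of their own.
        public class WithoutDefaultGroupStrategy : ISubscriptionStrategy
        {
            public IList<Subscription> GetSubscriptionData(IList<Subscription> own, IList<Subscription> defaults)
            {
                return own.ToList();
            }
        }

        public static class SubscriptionStrategySelector
        {
            public static ISubscriptionStrategy For(IList<Subscription> own)
            {
                // Email subscriptions (Kind == 2) do not count toward suppressing the defaults.
                bool hasOnSite = own.Any(s => s.Kind == 1);
                return hasOnSite
                    ? (ISubscriptionStrategy)new WithoutDefaultGroupStrategy()
                    : new OnlyDefaultGroupStrategy();
            }
        }

    The point of the selector is that callers never branch on the subscription kind themselves; they ask for a strategy and call GetSubscriptionData.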

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [consumer ID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, and so on.

    If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database to which you send a query, and a physical disk then starts to rotate somewhere to return an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely.

    What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these somewhere?

  • Chrome Extension - Console Log not firing

    - by coffeemonitor
    I'm starting to learn to make my own Chrome extensions, and starting small. At the moment I'm switching from the alert() function to console.log() for a cleaner development environment. For some reason, console.log() output is not showing up in my Chrome console, although alert() works just fine. Can someone review my code below and perhaps tell me why console.log() isn't firing as expected?

    manifest.json:

        {
            "manifest_version": 2,
            "name": "Sandbox",
            "version": "0.2",
            "description": "My Chrome Extension Playground",
            "icons": {
                "16": "imgs/16x16.png",
                "24": "imgs/24x24.png",
                "32": "imgs/32x32.png",
                "48": "imgs/48x48.png"
            },
            "background": {
                "scripts": ["js/background.js"]
            },
            "browser_action": {
                "default_title": "My Fun Sandbox Environment",
                "default_icon": "imgs/16x16.png"
            },
            "permissions": [
                "background",
                "storage",
                "tabs",
                "http://*/*",
                "https://*/*"
            ]
        }

    js/background.js:

        function click(e) {
            alert("this alert certainly shows");
            console.log("But this does not");
        }

        // Fire a function when the icon is clicked
        chrome.browserAction.onClicked.addListener(click);

    As you can see, I kept it very simple: just the manifest.json and a background.js file with an event listener for when the toolbar icon is clicked. As I mentioned, the alert() pops up nicely, while the console.log() appears to be ignored.

  • What are the linkage of the following functions?

    - by Derui Si
    When I was reading the C++03 standard (7.1.1 Storage class specifiers [dcl.stc]), I came across the examples below, and I'm not able to tell how the linkage of each successive declaration is determined. Could anyone help here? Thanks in advance!

        static char* f();               // f() has internal linkage
        char* f() { /* ... */ }         // f() still has internal linkage

        char* g();                      // g() has external linkage
        static char* g() { /* ... */ }  // error: inconsistent linkage

        void h();
        inline void h();                // external linkage

        inline void l();
        void l();                       // external linkage

        inline void m();
        extern void m();                // external linkage

        static void n();
        inline void n();                // internal linkage

        static int a;                   // a has internal linkage
        int a;                          // error: two definitions

        static int b;                   // b has internal linkage
        extern int b;                   // b still has internal linkage

        int c;                          // c has external linkage
        static int c;                   // error: inconsistent linkage

        extern int d;                   // d has external linkage
        static int d;                   // error: inconsistent linkage

    UPD: Additionally, how should I understand this statement in the standard: "The linkages implied by successive declarations for a given entity shall agree. That is, within a given scope, each declaration declaring the same object name or the same overloading of a function name shall imply the same linkage. Each function in a given set of overloaded functions can have a different linkage, however."

  • How to handle javascript & css files across a site?

    - by Industrial
    Hi everybody, I have been thinking recently about how to handle shared JavaScript and CSS files across a web application. In a current web application that I am working on, I have quite a large number of different JavaScript and CSS files placed in a folder on the server. Some of the files are reused, while others are not.

    On a production site it's quite wasteful to make a high number of HTTP requests and load many kilobytes of unnecessary JavaScript and redundant CSS. The solution, of course, is to create one big bundled file per page that contains only the necessary code, which is then minified and sent compressed (gzip) to the client.

    There's no problem bundling and minifying JavaScript files manually if you only have to do it once, but since the app is continuously maintained and things change and develop, it quickly becomes a headache to do this manually while pushing out updates that change the JavaScript and/or CSS files in production. What's a good approach to handling this? How do you handle it in your application?

  • MySQL Query Where Column Like Column

    - by shmeeps
    I'm working on a small project that involves grabbing a list of contacts stored for each group. Essentially, the database is set up so that each group has a primary and a secondary contact field, stored, unsurprisingly, as Group.Primary and Group.Secondary. The objective is to pull every primary and secondary contact for each group and display them in a sortable table. I have the sortable table worked out, but I have come across a small problem: each Primary and Secondary field can hold more than one contact ID, separated by commas. For instance, if Primary contained 123,256 it would need to pull both contacts with IDs 123 and 256.

    I had intended to use a query of the following form, so that I could skip dealing with the commas:

        SELECT *
        FROM Group G, Contacts C
        WHERE G.Primary LIKE %C.ID%
           OR G.Secondary LIKE %C.ID%

    but I can't seem to find a working query for this. My question is: am I just overlooking something here? Is there a simple query that would let me do this? Or am I better off getting the groups and contacts separately and combining the two later? I think the former is a little easier to understand when read, which is a plus since this is a shared project, but if that isn't possible I will do the latter. This code is simplified, but it gets the point across.
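
    A sketch of one way to express the match in MySQL, assuming the columns hold plain comma-separated ID lists with no spaces: FIND_IN_SET compares against the individual comma-separated elements, which avoids false matches such as ID 23 matching the list 123,256.

        SELECT G.*, C.*
        FROM `Group` G
        JOIN Contacts C
          ON FIND_IN_SET(C.ID, G.`Primary`) > 0
          OR FIND_IN_SET(C.ID, G.`Secondary`) > 0;

    The backticks are there because GROUP and PRIMARY are reserved words in MySQL.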

  • Remove php extension from URL on Windows hosting account using web.config

    - by asprin
    I've searched before asking this question. The answers I found were either related to Linux hosting accounts, or, for Windows hosting accounts, didn't match what I was looking for. As you might have guessed, I have a Windows shared hosting account with GoDaddy. My aim is to remove the '.php' extension from the URL. After researching, I found that .htaccess would do exactly what I want, but I also found that .htaccess doesn't work in a Windows environment and that I'll need a web.config file to do the same task.

    Now, I know there are modules through which the code can be generated, but the problem is I don't know how to get them installed on my hosting account, and I don't want to go through the process of contacting GoDaddy, so I'm looking to solve this on my own. What I'm looking for is a web.config equivalent of the .htaccess rewrite. This is what I'm trying to achieve:

        Current URL : www.abcdef.com/contact.php
        Desired URL : www.abcdef.com/contact

    Any help would be greatly appreciated. Thanks, Nisar.
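
    A sketch of a web.config rule for this, assuming the host has the IIS URL Rewrite module enabled (many shared Windows hosts do): it rewrites extensionless requests such as /contact to /contact.php when the corresponding .php file exists.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Hide .php extension" stopProcessing="true">
                  <match url="^(.*)$" />
                  <conditions>
                    <!-- Only rewrite when the request is not an existing file or folder
                         and the corresponding .php file does exist. -->
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}.php" matchType="IsFile" />
                  </conditions>
                  <action type="Rewrite" url="{R:1}.php" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>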

  • Getting page info from a webpage when sharing it, Facebook style

    - by user556167
    Hi all. I am writing an app to share a page with friends while browsing. So far I can get the URL of the page into my app and display it on the screen. Could anyone tell me how to improve this to also show the headline of the page and the picture of the page, the way they appear when a page is shared with friends on Facebook? Here is the code that gets and displays the URL of the page:

        Intent intent = getIntent();
        String action = intent.getAction();
        String type = intent.getType();
        String mNewLinkUrl = "";

        if (action != null && type != null
                && Intent.ACTION_SEND.equals(action)
                && "text/plain".equals(type)) {
            mNewLinkUrl = intent.getStringExtra(Intent.EXTRA_TEXT);
        }

        TextView ptext = (TextView) findViewById(R.id.pagetext);
        ptext.append("\n" + mNewLinkUrl);

    Any insight will be appreciated.

    Update 1: I found a method: use Jsoup to download the HTML as a Document, then use Elements and Element to get the data. For example, the following is the code to get the links imported in the HTML page:

        Elements imports = doc.select("link[href]");
        for (Element link : imports) {
            Log.d(TAG, "link.tagName()" + link.tagName()
                    + " link.attr(abs:href)" + link.attr("abs:href")
                    + " link.attr(rel)" + link.attr("rel"));
        }

    But I am not able to figure out which attributes Facebook actually reads. Can anyone help me with this? Thank you.
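
    A sketch of reading the Open Graph tags that Facebook-style previews are typically built from (og:title, og:image, og:description), with the document title as a fallback; this assumes the shared page actually declares those meta tags.

        // Imports assumed: org.jsoup.Jsoup, org.jsoup.nodes.Document
        // Run this off the UI thread (e.g. in an AsyncTask), since it does network I/O.
        Document doc = Jsoup.connect(mNewLinkUrl).get();

        String ogTitle = doc.select("meta[property=og:title]").attr("content");
        String ogImage = doc.select("meta[property=og:image]").attr("content");
        String ogDescription = doc.select("meta[property=og:description]").attr("content");

        // Fall back to the <title> element when og:title is missing.
        String headline = ogTitle.isEmpty() ? doc.title() : ogTitle;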

  • On android how would I go about creating a prompt for an app that requires a user to enter a username before fully launching the app?

    - by racl101
    I'll preface this with: I'm really new to working on Android. I have to work on an existing app and create a screen that prompts the user for a username; if one isn't entered, it won't launch the existing main activity (i.e. it won't fully launch the app; instead it will just sit on that screen until a username is entered). I suppose it's tantamount to a web app login page in that it will not let a user past that page if the user is not authenticated and authorized. However, there's no authentication or authorization in this app. Instead, on first run a user must simply register with a username, which gets saved to the database, and for any subsequent runs the app's main activity will just start because a user has registered on that phone. In fact, the app will not prompt the user for their name again unless the app gets deleted with all its data and reinstalled. This implicitly means that I have to save the user's name in the database or some other kind of storage.

    So I was wondering what would be the best way of doing this in an app with an existing main activity. Should I try to accomplish this in the existing main activity, or create a new activity to display this prompt screen and, in effect, block the main activity from running until the user enters their name? Any tutorials or links would be helpful, and thank you in advance for any help.
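
    A minimal sketch of the "separate prompt activity" approach, using SharedPreferences rather than a database for the stored name; RegisterActivity and the preference keys are hypothetical names.

        // In the existing main activity's onCreate():
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            SharedPreferences prefs = getSharedPreferences("app_prefs", MODE_PRIVATE);
            if (!prefs.contains("username")) {
                // No registered user yet: show the prompt screen and skip the main UI.
                startActivity(new Intent(this, RegisterActivity.class));
                finish();
                return;
            }

            setContentView(R.layout.main);
            // ... existing initialization ...
        }

        // In RegisterActivity, once a non-empty name has been entered:
        //   prefs.edit().putString("username", name).commit();
        //   startActivity(new Intent(this, MainActivity.class));
        //   finish();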

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'

        undefined:1

        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES:
           { '100': 'Continue', '101': 'Switching Protocols', '102': 'Processing'
           , '200': 'OK', '201': 'Created', '202': 'Accepted', '203': 'Non-Authoritative Information'
           , '204': 'No Content', '205': 'Reset Content', '206': 'Partial Content', '207': 'Multi-Status'
           , '300': 'Multiple Choices', '301': 'Moved Permanently', '302': 'Moved Temporarily'
           , '303': 'See Other', '304': 'Not Modified', '305': 'Use Proxy', '307': 'Temporary Redirect'
           , '400': 'Bad Request', '401': 'Unauthorized', '402': 'Payment Required', '403': 'Forbidden'
           , '404': 'Not Found', '405': 'Method Not Allowed', '406': 'Not Acceptable'
           , '407': 'Proxy Authentication Required', '408': 'Request Time-out', '409': 'Conflict'
           , '410': 'Gone', '411': 'Length Required', '412': 'Precondition Failed'
           , '413': 'Request Entity Too Large', '414': 'Request-URI Too Large', '415': 'Unsupported Media Type'
           , '416': 'Requested Range Not Satisfiable', '417': 'Expectation Failed', '418': 'I\'m a teapot'
           , '422': 'Unprocessable Entity', '423': 'Locked', '424': 'Failed Dependency'
           , '425': 'Unordered Collection', '426': 'Upgrade Required', '500': 'Internal Server Error'
           , '501': 'Not Implemented', '502': 'Bad Gateway', '503': 'Service Unavailable'
           , '504': 'Gateway Time-out', '505': 'HTTP Version not supported', '506': 'Variant Also Negotiates'
           , '507': 'Insufficient Storage', '509': 'Bandwidth Limit Exceeded', '510': 'Not Extended' }
        , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
        , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
        , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
        , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
        , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
        , createServer: [Function]
        , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
        , createClient: [Function]
        , cat: [Function] }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.
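
    For what it's worth, a sketch of two things to try: current Node versions do expose require inside -e/--eval, so this is largely a version issue, and the second line avoids --eval entirely by feeding the script to node on stdin. Whether either works on 0.2.6 specifically is an assumption I have not verified.

        # On current Node versions, require() is available inside -e / --eval:
        node -e 'console.log(require("http").STATUS_CODES[404])'

        # Alternative that avoids --eval: feed the script to node on stdin.
        echo 'console.log(require("http").STATUS_CODES[404])' | node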

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events
            ID      (PK, int, auto-increment)
            Time    (datetime)
            Caption (varchar)

        Position
            ID       (PK, int, auto-increment)
            Time     (datetime)
            Easting  (float)
            Northing (float)

    Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.:

        SELECT E.*, P.*
        FROM Events E
        JOIN Position P ON E.Time = P.Time

    Or even just comparing a datetime value, taking into consideration that the parameterized value may contain a fractional-seconds part (which MySQL has always accepted), e.g.:

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT milliseconds, so I would assume this query works. However, I have read that as of version 5.6.4 MySQL can now store fractional seconds in the datetime field. Assuming datetime values are inserted using functions such as NOW(), the fractional part is truncated (< 5.6.4), which I would assume allows the above query to work. With version 5.6.4 and later, however, this could potentially NOT work. I am, and only ever will be, interested in second accuracy.

    If anyone could answer the following questions, it would be greatly appreciated:

    In general, how does MySQL compare datetime fields against one another (consider the above query)?
    Is the above query fine, and does it make use of indexes on the Time fields? (MySQL < 5.6.4)
    Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4)
    Will the join query above work? (MySQL 5.6.4)

    EDIT: I know I can cast the datetimes, thanks to those who answered, but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed) and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries and applying indexes, not to mention having to rewrite all my queries.

    EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?
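
    A sketch for the "exclude milliseconds" part, under the assumption that second accuracy is all that's wanted: in 5.6.4+ the fractional-second precision is part of the column definition, and a plain DATETIME (precision 0) keeps the old second-only behaviour, so equality joins and index use are unchanged and no functions are needed; incoming values with a fractional part are rounded to the column's precision on insert.

        -- Plain DATETIME means zero fractional digits, i.e. second accuracy only.
        CREATE TABLE Events (
            ID      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            Time    DATETIME NOT NULL,
            Caption VARCHAR(255),
            KEY idx_events_time (Time)
        );

        CREATE TABLE Position (
            ID       INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            Time     DATETIME NOT NULL,
            Easting  FLOAT,
            Northing FLOAT,
            KEY idx_position_time (Time)
        );

        -- Equality join on two second-precision columns; no functions required.
        SELECT E.*, P.*
        FROM Events E
        JOIN Position P ON E.Time = P.Time;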

  • What kind of online hosting do I need for a WCF-based service?

    - by mafutrct
    First of all, I'm not sure if SO is the right place to ask; please migrate me if needed.

    I would like to host a WCF-based service so it is available to everyone. While hosting it on my personal, local servers has worked, I would prefer to move it to an external service provider for various reasons. I'll be blunt: I have no clue about hosting providers. I know there are web hosts, virtual servers, root servers and several other services. What I would like to know is what kind of hosting I need in my case. I understand that a root server would easily fulfill my requirements, but that is not exactly cheap.

    The program I'd like to run on the server requires .NET 4, preferably on a Windows machine. Access to a folder in the file system would be much appreciated (1 GB of storage is enough by far). It needs to communicate with clients (applications written in .NET) by opening a port on the server. Traffic is low (<< 1 GB/month?). There is no website. Having the provider perform updates would be nice.

    My understanding is that a virtual server would be a possible solution. Prices seem to start at around 5€/month, which is OK for me. However, I read that for these cheap plans RAM is severely limited (~400 MB), and I'm not confident that is enough to run Windows and a .NET application.

  • How to make chrome.tabs.update work with a content script

    - by user1673772
    I'm working on a little extension for Google Chrome. I want to create a new tab, go to the URL "sample"+i+".com", run a content script on that URL, then update the same tab to "sample"+(i+1)+".com" and run the same script again. I have looked at the Q&As available on Stack Overflow and googled around, but I haven't found a solution that works.

    This is my current code in background.js (it works): it creates two tabs (i=21 and i=22) and loads my content script for each URL. But when I try to use chrome.tabs.update instead, Chrome goes directly to a tab with i=22 (and the script only runs once):

        function extraction(tab) {
            for (var i = 21; i < 23; i++) {
                chrome.storage.sync.set({'extraction': 1}, function() {});   // for my content script
                chrome.tabs.create({url: "http://example.com/" + i + ".html"}, function() {});
            }
        }

        chrome.browserAction.onClicked.addListener(function(tab) { extraction(tab); });

    The content script and manifest.json are not the problem. I want to do this about 15,000 times, so I can't do it any other way. Thank you.
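
    A sketch of one way to sequence this instead of looping synchronously (the loop fires all navigations immediately, so only the last URL wins): navigate one tab, wait for chrome.tabs.onUpdated to report status "complete", then move on to the next URL. The start/end values and URL pattern are taken from the question; everything else is illustrative.

        var startIndex = 21;
        var endIndex = 22;

        function visitNext(tabId, i) {
            if (i > endIndex) {
                return; // finished the whole range
            }
            chrome.tabs.update(tabId, { url: "http://example.com/" + i + ".html" });

            function onUpdated(updatedTabId, changeInfo) {
                // Wait until this tab has finished loading the requested page.
                if (updatedTabId === tabId && changeInfo.status === "complete") {
                    chrome.tabs.onUpdated.removeListener(onUpdated);
                    // A manifest-declared content script should have been injected by now
                    // (adjust if it uses run_at: document_start/document_end); continue.
                    visitNext(tabId, i + 1);
                }
            }
            chrome.tabs.onUpdated.addListener(onUpdated);
        }

        chrome.browserAction.onClicked.addListener(function () {
            chrome.tabs.create({ url: "about:blank" }, function (tab) {
                visitNext(tab.id, startIndex);
            });
        });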

  • What are your suggestions for best practices for regular data updates in a website database?

    - by bboyle1234
    My shared-hosting ASP.NET website must automatically run data update routines at regular times of day. Once certain update routines have finished running, it can run other update routines that depend on the previous updates. I have done this type of work before, using quite complicated setups. Some features of the framework I created:

    A cron job on another server makes a request which starts a data update routine on the main server.
    Each updater is loaded from web.config.
    Each updater overrides a "canRunUpdate" method that determines whether its dependencies have finished updating.
    Each updater overrides a "hasFinishedUpdate" method.
    Each updater overrides a "runUpdate" method.
    Updaters start and run in parallel threads.

    The initial request from the cron-job server started each updater in its own thread and then ended. As a result, the threads containing the updaters would be terminated before the updaters were finished, so I had to give the updaters the ability to save partial results and continue the update job the next time they were started. As a consequence, the cron server had to call the updaters many times to ensure the job got done. Sometimes the cron server would continue making update requests long after all the updates were completed, and sometimes it would finish making update requests and leave some updates uncompleted. It's not the best system. I'm looking for inspiration. Any ideas please? Thank you :)
