Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • Common lisp, CFFI, and instantiating c structs

    - by andrew
    Hi, I've been on Google for about, oh, 3 hours looking for a solution to this "problem." I'm trying to figure out how to instantiate a C structure in Lisp using CFFI. I have a struct in C:

        struct cpVect{cpFloat x,y;}

    Simple, right? I have auto-generated CFFI bindings (SWIG, I think) to this struct:

        (cffi:defcstruct #.(chipmunk-lispify "cpVect" 'classname)
          (#.(chipmunk-lispify "x" 'slotname) :double)
          (#.(chipmunk-lispify "y" 'slotname) :double))

    This generates a struct "VECT" with slots :X and :Y, which foreign-slot-names confirms (please note that I neither generated the bindings nor programmed the C library (Chipmunk physics), but the actual functions are being called from Lisp just fine). I've searched far and wide, and maybe I've seen it 100 times and glossed over it, but I cannot figure out how to create an instance of cpVect in Lisp to use in other functions. Note that the function

        cpShape *cpPolyShapeNew(cpBody *body, int numVerts, cpVect *verts, cpVect offset)

    takes not only a cpVect but also a pointer to a set of cpVects, which brings me to my second question: how do I create a pointer to a set of structs? I've been to http://common-lisp.net/project/cffi/manual/html_node/defcstruct.html and tried the code, but I get "Error: Unbound variable: PTR" (I'm in Clozure CL), not to mention that it looks to only return a pointer, not an instance. I'm new to Lisp and have been going pretty strong so far, but this is the first real problem I've hit that I can't figure out. Thanks!
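
    For what it's worth, a minimal sketch of what usually works with CFFI, assuming the lispified names really are VECT, X and Y as described above (the struct/slot names and some-chipmunk-function are placeholders; newer CFFI versions prefer the type designator (:struct vect)):

        ;; Allocate one cpVect on the foreign heap for the duration of the body,
        ;; fill its slots, and pass the pointer wherever a cpVect* is expected.
        (cffi:with-foreign-object (v 'vect)
          (setf (cffi:foreign-slot-value v 'vect 'x) 1.0d0
                (cffi:foreign-slot-value v 'vect 'y) 2.0d0)
          (some-chipmunk-function v))          ; hypothetical call taking cpVect*

        ;; A "pointer to a set of structs": allocate N contiguous cpVects.
        (let* ((n 4)
               (verts (cffi:foreign-alloc 'vect :count n))
               (size  (cffi:foreign-type-size 'vect)))
          (dotimes (i n)
            (let ((elt (cffi:inc-pointer verts (* i size))))  ; pointer to i-th struct
              (setf (cffi:foreign-slot-value elt 'vect 'x) 0.0d0
                    (cffi:foreign-slot-value elt 'vect 'y) 0.0d0)))
          ;; ... pass verts as the cpVect* argument, then free it when done
          (cffi:foreign-free verts))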

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.

    We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application sets these before invoking the query, and the queries can then return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.

    I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is whether there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    where the values persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
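
    One caveat on option 1: T-SQL variables declared with DECLARE are scoped to the batch, so they do not survive across separate Execute calls. A session-scoped mechanism that does persist for the life of the connection is CONTEXT_INFO; a rough T-SQL sketch, with the two dates packed into the 128-byte blob (dbo.SomeReportTable and the byte positions are illustrative choices, not anything from the post):

        -- Once per connection: pack both dates into the session's CONTEXT_INFO.
        DECLARE @ctx varbinary(128);
        SET @ctx = CONVERT(varbinary(8), CONVERT(datetime, '20090101'))
                 + CONVERT(varbinary(8), CONVERT(datetime, '20100101'));
        SET CONTEXT_INFO @ctx;

        -- In any later query on the same connection:
        SELECT *
        FROM   dbo.SomeReportTable
        WHERE  ReportDate BETWEEN CONVERT(datetime, SUBSTRING(CONTEXT_INFO(), 1, 8))
                              AND CONVERT(datetime, SUBSTRING(CONTEXT_INFO(), 9, 8));

    Whether the packing/unpacking is worth it compared to just parameterising each command (option 2) is a judgement call; CONTEXT_INFO is mainly attractive when the queries themselves cannot easily be changed to take parameters.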

    Read the article

  • MS Query Analyzer/Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5 and I've always been a bit amazed at the fact that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tool is even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way the Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless.

    The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :)

    Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools to be adequate? Have you ever wished they were better, and if you have found something better, could you share? Thanks!

    Read the article

  • Java: Implement own message queue (threadsafe)

    - by derMax
    The task is to implement my own message queue that is thread safe. My approach:

        public class MessageQueue {
            /**
             * Number of strings (messages) that can be stored in the queue.
             */
            private int capacity;

            /**
             * The queue itself; all incoming messages are stored in here.
             */
            private Vector<String> queue = new Vector<String>(capacity);

            /**
             * Constructor, initializes the queue.
             *
             * @param capacity The number of messages allowed in the queue.
             */
            public MessageQueue(int capacity) {
                this.capacity = capacity;
            }

            /**
             * Adds a new message to the queue. If the queue is full, it waits until a message is released.
             *
             * @param message
             */
            public synchronized void send(String message) {
                //TODO check
            }

            /**
             * Receives a new message and removes it from the queue.
             *
             * @return
             */
            public synchronized String receive() {
                //TODO check
                return "0";
            }
        }

    If the queue is empty and I call receive(), I want to call wait() so that another thread can use the send() method. Correspondingly, I have to call notifyAll() after every operation. Question: is that possible? I mean, does it work that when I call wait() in one method of an object, another thread can then execute another method of the same object? And another question: does that seem like a sensible design?
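
    To the first question: yes - wait() releases the object's monitor, so while one thread is blocked inside receive(), another thread can enter the synchronized send() on the same object. A minimal sketch of the two method bodies, dropped into the MessageQueue class above (the signatures gain throws InterruptedException; alternatively catch it and restore the interrupt flag):

        public synchronized void send(String message) throws InterruptedException {
            while (queue.size() >= capacity) {   // full: wait until a receiver frees a slot
                wait();
            }
            queue.add(message);
            notifyAll();                         // wake any threads blocked in receive()
        }

        public synchronized String receive() throws InterruptedException {
            while (queue.isEmpty()) {            // empty: wait until a sender adds a message
                wait();
            }
            String message = queue.remove(0);
            notifyAll();                         // wake any threads blocked in send()
            return message;
        }

    The while-loop guard around wait() is the usual idiom, since a woken thread must re-check the condition. If hand-rolling is not a requirement, java.util.concurrent.ArrayBlockingQueue already implements exactly this behaviour.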

    Read the article

  • Best practices for Magento deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirement for changing the DB, so the 3 databases will be maintained manually. Specifically:

    How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    Do you restrict developers to editing specific files/folders?
    Do you restrict theme designers to editing specific files/folders?
    How do you manage database changes?

    Theme designer files/folders. Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension developer files/folders. Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management. As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    Overriding the base URL in PHP (see the blog article on setting up dev and staging databases).
    Changing the base URL in the database after copying. (Where is this stored? See the sketch below.)
    Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.
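
    On the "where is this stored?" point: Magento keeps the base URLs in the core_config_data table, so after copying a database between environments they can be rewritten with something like the sketch below (the hostname is a placeholder; the rows shown are the default-scope ones, so adjust scope/scope_id if store-level overrides are used, and clear var/cache afterwards so the config cache is rebuilt):

        UPDATE core_config_data
        SET    value = 'http://dev.example.com/'
        WHERE  path IN ('web/unsecure/base_url', 'web/secure/base_url')
          AND  scope = 'default';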

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The background: my group has 4 SQL Server databases: Production, UAT, Test and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The problem: the entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add or remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The question: people have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a Ternary Search Tree for a while, as the data structure behind an auto-complete drop-down combo box. This means that when the user types "fo", the drop-down combo box will display

        foo
        food
        football

    The problem is, my current use of the Ternary Search Tree is case sensitive. My implementation is as follows. It has been used in the real world for around a year or so, hence I consider it quite reliable: My Ternary Search Tree code

    However, I am looking for a case-insensitive Ternary Search Tree, which means that when I type "fo", the drop-down combo box will show me

        foO
        Food
        fooTBall

    Here are some key interfaces for the TST, and I hope the new case-insensitive TST can have a similar interface too:

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows. TSTSearchEngine is using TernarySearchTree as its core backbone: Example usage of Ternary Search Tree

        // There are stocks named microsoft and MICROChip inside the stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROCHIP. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");
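
    One low-impact approach, assuming the existing tree class really is TernarySearchTree<E> with the put/get shown above: normalise the key to a single case at the put/get boundary and leave the stored values (which carry the original casing) untouched. A sketch of a hypothetical thin wrapper; any prefix-search methods such as searchAll would need the same normalisation applied to their argument:

        // Hypothetical wrapper around the existing, case-sensitive tree.
        public class CaseInsensitiveTST<E> {
            private final TernarySearchTree<E> tree = new TernarySearchTree<E>();

            private static String normalize(String key) {
                return key.toLowerCase(java.util.Locale.ENGLISH);
            }

            public void put(String key, E value) {
                // "MICROChip" and "microsoft" now share the "micro" prefix in the tree
                tree.put(normalize(key), value);
            }

            public E get(String key) {
                return tree.get(normalize(key));
            }
        }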

    Read the article

  • Protecting an Application's Memory From Tampering

    - by Changeling
    We are adding AES 256-bit encryption to our server and client applications for encrypting the TCP/IP traffic containing sensitive information. We will be rotating the keys daily. Because of that, the keys will be stored in memory with the applications.

    Key distribution process:

    Each server and client will have a list of initial Key Encryption Keys (KEKs), one per day.
    If the client or the server has just started up, the client will request the daily key from the server using the initial key. The server will respond with the daily key, encrypted with the initial key.
    The daily key is a randomly generated set of alphanumeric characters. We are using AES 256-bit encryption.
    All subsequent communications will be encrypted using that daily key.
    Nightly, the client will request the new daily key from the server using the current daily key as the current KEK. After the client gets the new key, the new daily key will replace the old daily key.

    Is it possible for another bad application to gain access to this memory illegally, or is this protected in Windows? The key will not be written to a file, only stored in a variable in memory. If an application can access the memory illegally, how can you protect the memory from tampering? We are using C++ and XP (Vista/7 may be an option in the future, so I don't know if that changes the answer).
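
    Broadly: an ordinary process cannot read another process's memory, but anything running with sufficient privileges (an administrator attaching a debugger, for example) can, so the usual mitigation is to keep the key encrypted in memory except at the moment of use. A sketch of that idea using CryptProtectMemory, which Microsoft documents for Vista and later (on XP the equivalent RtlEncryptMemory/SystemFunction040 exists); error handling omitted, and the AES call is a placeholder:

        // dpapi.h / crypt32.lib. Buffer length must be a multiple of
        // CRYPTPROTECTMEMORY_BLOCK_SIZE (16 bytes).
        #include <windows.h>
        #include <dpapi.h>

        unsigned char dailyKey[32];   // 256-bit key, assumed already filled in

        void LockKey()
        {
            CryptProtectMemory(dailyKey, sizeof(dailyKey),
                               CRYPTPROTECTMEMORY_SAME_PROCESS);
        }

        void UseKey()
        {
            CryptUnprotectMemory(dailyKey, sizeof(dailyKey),
                                 CRYPTPROTECTMEMORY_SAME_PROCESS);
            // ... pass dailyKey to the AES routine here ...
            CryptProtectMemory(dailyKey, sizeof(dailyKey),
                               CRYPTPROTECTMEMORY_SAME_PROCESS);   // re-protect immediately
            // Also consider SecureZeroMemory on any temporary copies of the key.
        }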

    Read the article

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

    Start Swank with lein swank.
    Go to Emacs and connect with M-x slime-connect.
    Load all existing source files one by one. This also starts a Jetty server and the application.
    Write some code in the REPL.
    When satisfied with the experiments, write a full version of the construct I had in mind.
    Eval (C-c C-c) it.
    Switch the REPL to the namespace where this construct resides and test it.
    Switch to the browser and reload the browser tab with the affected page.
    Tweak the code, eval it, check in the browser.
    Repeat any of the above.

    There are a number of annoyances with it:

    I have to switch between Emacs and the browser (or browsers, if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
    My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol? (See the snippet below.)
    I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
    Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours. P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.
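
    On the "can I undef a symbol?" point: yes. A few REPL helpers from clojure.core that cover the common cases (the namespace names below are placeholders):

        ;; Remove a single var from a namespace, e.g. after renaming or moving a function
        (ns-unmap 'my.app.handlers 'old-handler)

        ;; Drop an entire namespace and load it again from source
        (remove-ns 'my.app.handlers)
        (require 'my.app.handlers :reload)

        ;; Reload a namespace together with everything it depends on
        (require 'my.app.core :reload-all)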

    Read the article

  • C# parameters by reference and .net garbage collection

    - by Yarko
    I have been trying to figure out the intricacies of the .NET garbage collection system and I have a question related to C# reference parameters. If I understand correctly, variables defined in a method are stored on the stack and are not affected by garbage collection. So, in this example:

        public class Test
        {
            public Test() { }

            public int DoIt()
            {
                int t = 7;
                Increment(ref t);
                return t;
            }

            private void Increment(ref int p)
            {
                p++;
            }
        }

    the return value of DoIt() will be 8. Since the location of t is on the stack, that memory cannot be garbage collected or compacted, and the reference variable in Increment() will always point to the proper contents of t. However, suppose we have:

        public class Test
        {
            private int t = 7;

            public Test() { }

            public int DoIt()
            {
                Increment(ref t);
                return t;
            }

            private void Increment(ref int p)
            {
                p++;
            }
        }

    Now t is stored on the heap, as it is a value of a specific instance of my class. Isn't this possibly a problem if I pass this value as a reference parameter? If I pass t as a reference parameter, p will point to the current location of t. However, if the garbage collector moves this object during a compact, won't that mess up the reference to t in Increment()? Or does the garbage collector update even references created by passing reference parameters? Do I have to worry about this at all? The only mention of worrying about memory being compacted on MSDN (that I can find) is in relation to passing managed references to unmanaged code. Hopefully that's because I don't have to worry about any managed references in managed code. :)

    Read the article

  • Search for a date between given ranges - Lotus

    - by Kris.Mitchell
    I have been trying to work out the best way to gather all of the documents in a database that have a certain date. Originally I was trying to use FTSearch or Search to move through a document collection, but I changed over to processing a view and its associated documents.

    My first question is: what is the easiest way to spin through a set of documents and find out whether a date stored in the documents is greater than or less than a specified date?

    To continue working, I implemented the following code:

        If (doc.creationDate(0) > CDat(parm1)) And (doc.creationDate(0) < CDat(parm2)) Then
            ...
        End If

    but the results are off:

        Included! Date: 3/12/10 11:07:08   P1: 3/1/10   P2: 3/5/10
        Included! Date: 3/13/10 9:15:09    P1: 3/1/10   P2: 3/5/10
        Included! Date: 3/17/10 16:22:07   P1: 3/1/10   P2: 3/5/10

    You can see that the date stored in the doc is not between P1 and P2. BUT it does limit the documents with a date less than P1 correctly, so I won't get a result for a document with a date less than 3/1/10. If there isn't a better way than the If statement, can someone help me understand why the examples above are included?

    Read the article

  • LINQ Join on Dictionary<K,T> where only K is changed.

    - by Stacey
    Assume types TModel, TKey and TValue. I have a dictionary of KeyValuePair<TKey, TValue>, and I need to merge TKey into a separate model of KeyValuePair<TModel, TValue>, where each TKey in the original dictionary refers to an identifier in a list of TModel that will replace the key in the dictionary.

        public class TModel {
            public Guid Id { get; set; }
            // ...
        }

    The elements are held in a Dictionary<Guid, TValue>; TValue relates to TModel. The serialized/stored object is like this:

        public class SerializedModel {
            public Dictionary<Guid, TValue> Items { get; set; }
        }

    So I need to construct a new model:

        public class KeyValueModel {
            public Dictionary<TModel, TValue> Items { get; set; }
        }

        KeyValueModel kvm = (from tModels in controller.ModelRepository.List<Models.Models>()
                             join matchingModels in storedInformation.Items
                               on tModels.Id equals matchingModels
                             select tModels).ToDictionary(
                                 c => c.Id,
                                 storedInformation.Items.Values);

    This LINQ query isn't doing what I want, but I think I'm at least headed in the right direction. Can anyone assist with the query? The original object is stored as a KeyValuePair<Guid, TValue>. I need to merge the Guid keys in the dictionary with their actual related objects in another object (a List<TModel>) so that the final result is a KeyValuePair<TModel, TValue>. As for what the query is not doing for me: it isn't compiling or running; it just says that the join is not valid.
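
    Reading between the lines, the goal is to turn the stored Dictionary<Guid, TValue> into a Dictionary<TModel, TValue> by resolving each Guid key against the model list. A sketch of how that join usually looks (property and repository names follow the post; everything else is guesswork, and it needs using System.Linq):

        var models = controller.ModelRepository.List<Models.Models>();

        var merged = (from item in storedInformation.Items          // KeyValuePair<Guid, TValue>
                      join model in models on item.Key equals model.Id
                      select new { model, item.Value })
                     .ToDictionary(x => x.model, x => x.Value);

    The original query failed to compile because the join compared a Guid (tModels.Id) against a whole KeyValuePair (matchingModels); joining on item.Key keeps both sides of the equals the same type.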

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy.

    I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

    1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity.
    2. INSTEAD OF INSERT TRIGGER - does not require the insert to have a NULL path passed, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY().
    3. STORED PROCEDURE - requires all inserts into this table to be done through the stored procedure implementing the trigger logic.
    4. VIEW - requires building the path in the view.

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2 and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic, and requires use of the view whenever a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
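
    For option 4 (or simply for comparison), the path can be built at query time in SQL Server 2005 with a recursive CTE. A sketch against the table described above; the table name and varchar sizes are assumptions:

        WITH tree_path (id, parent_id, path) AS
        (
            -- anchor: the root rows
            SELECT id, parent_id, CAST(id AS varchar(900)) AS path
            FROM   dbo.TreeTable
            WHERE  parent_id IS NULL

            UNION ALL

            -- recursive step: append each child's id to its parent's path
            SELECT t.id, t.parent_id,
                   CAST(tp.path + '-' + CAST(t.id AS varchar(10)) AS varchar(900))
            FROM   dbo.TreeTable t
            JOIN   tree_path tp ON tp.id = t.parent_id
        )
        SELECT id, parent_id, path
        FROM   tree_path;

    Wrapping this in a view gives the display-only path without maintaining a column; whether that is fast enough depends on how deep the tree is and how often the path is read versus written.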

    Read the article

  • Handling missing/incomplete data in R -- is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to TRUE to remove the NAs:

        mean(c(5, 6, 12, 87, 9, NA, 43, 67), na.rm=TRUE)

    But if you want to deal with NAs before the function call, you need to do something like this:

        # remove each 'NA' from a vector:
        vx = vx[!is.na(vx)]

        # remove each 'NA' from a vector and replace it with a '0':
        ifelse(is.na(vx), 0, vx)

        # remove each entire row that contains an 'NA' from a data frame:
        dfx = dfx[complete.cases(dfx),]

    All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want, though -- making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to complete.cases, yet that column has no 'NA' values in it).

    To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal -- but not remove -- NAs during a function call. Is there an analogous function in R?

    Read the article

  • Adding XOR function to bigint library

    - by Jason Gooner
    Hi, I'm using this Big Integer library for JavaScript: http://www.leemon.com/crypto/BigInt.js and I need to be able to XOR two bigInts together; sadly the library doesn't include such a function. The library is relatively simple, so I don't think it's a huge task, just confusing. I've been trying to hack one together but not having much luck, and would be very grateful if someone could lend me a hand. This is what I've attempted (it might be wrong), but I'm guessing the structure is going to be quite similar to some of the other functions in there:

        function xor(x, y) {
            var c, k, i;
            var result = new Array(0); // big int for result
            k = x.length > y.length ? x.length : y.length; // array length of the larger num
            // Make sure result is the correct array size? maybe:
            // result = expand(result, k); // ?
            for (c = 0, i = 0; i < k; i++) {
                // Do some xor here
            }
            // return the bigint xor result
            return result;
        }

    What confuses me is that I don't really understand how the library stores numbers in the array blocks for the bigInt. I don't think it's a case of simply bigintC[i] = bigintA[i] ^ bigintB[i], since most other functions have some masking operation at the end that I don't understand. I would really appreciate any help getting this working. Thanks
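
    A rough sketch, under the assumption that BigInt.js stores a number as an array of non-negative integers, least significant element first, each holding bpe bits, and that its expand() helper returns a zero-padded copy of a given length (both assumptions should be checked against the library source). Because XOR never produces a carry, a per-element XOR keeps every element within its bpe-bit mask, so no extra masking pass is needed:

        function xor(x, y) {
            var i;
            var k = (x.length > y.length) ? x.length : y.length;
            var result = expand(x, k);        // padded copy of x; elements beyond x are zero
            for (i = 0; i < k && i < y.length; i++) {
                result[i] ^= y[i];            // XOR element-wise; no carries, so no masking
            }
            return result;
        }

    Note this treats both inputs as non-negative magnitudes; if the library's sign handling matters for your use, that would need separate treatment.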

    Read the article

  • PHP index page utilizing an idea for a superswitch...

    - by Matt
    I don't know if this is a great idea... or a crap one. But I was thinking I might use one page to display all my pages using includes. Here is what my index.php would look like. In the functions include there is a function called "superSwitch" which determines what requested page will be included. For instance, if I do a GET of ?a=a, it will go to the function superSwitch(a); superSwitch will associate that with login.php and respond with it.

    Here is the code for index.php. Please let me know if this makes sense and might work, or whether I should just stick to long blocks of code (which is why I am trying this -- I hate long pages full of code...). Of course, as you can tell, it is not actually including anything yet; the print is for debugging purposes. :) Thanks, Matt

        <?php
        // include functions
        include_once('inc/func.inc.php');

        // set superget variable
        $superget = @$_GET['a'];

        // check if superget is set or null
        if (!$superget) {
            echo "Nothing Requested :)";
        } else {
            // sanitize the superget request
            $supergetr = supergetSanitize($superget);

            // use the result "good" or "nogood" to determine what happens
            if ($supergetr == "good") {
                // pull the superSwitch value of the request
                $ssresult = superSwitch($superget);
                print_r($ssresult);
            } else {
                // if the sanitize result is nogood,
                // the superSwitch is instructed to respond with a 404 page
                $superget = "404";
                $ssresult = superSwitch($superget);
                print_r($ssresult);
            }
        }
        ?>
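
    For what it's worth, the usual shape of such a dispatcher is a whitelist map from request keys to include files, which also doubles as the sanitiser (anything not in the map falls through to the 404 page). A hypothetical sketch of superSwitch along those lines - the file names and the pages/ directory are made up:

        <?php
        // Hypothetical superSwitch: whitelist of allowed pages, defaulting to 404.
        function superSwitch($page)
        {
            $pages = array(
                'a'    => 'login.php',
                'home' => 'home.php',
                '404'  => '404.php',
            );
            $file = isset($pages[$page]) ? $pages[$page] : $pages['404'];
            include 'pages/' . $file;
        }
        ?>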

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, and perhaps adds a few more records.
    4. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform calculations without touching the pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but keeping it would speed up the process next time.

    Application components can (and will) be run on different computers (in the same network), so the storage has to be reachable from multiple hosts.

    I have considered using memcached, but I'm not quite sure whether I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data in between iterations (which could easily happen)?

    So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts on the network... Any other suggestions? Something I should consider?

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving when possible would give a substantial speedup.

    However, so far I have been unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate the insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated), and the various insert()-like functions would just call that with different arguments. My best attempt at this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value) { do_insert (value); }
            void insert (element&& value)      { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            {
                element x (arg);
            }
        };

        int main ()
        {
            {
                // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            {
                // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work, at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and is that why emplace()-like functions exist in the first place?
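
    The likely culprit is the by-value do_insert(Arg arg) parameter: by the time element x is constructed inside, arg is a named lvalue, so the rvalue-ness of the original argument is lost. The usual remedy is perfect forwarding, which keeps a single do_insert while preserving the value category of whatever insert() was given. A sketch reusing the element struct from the post (C++0x/C++11, e.g. GCC with -std=c++0x):

        #include <utility>

        struct container
        {
            void insert (const element& value) { do_insert (value); }
            void insert (element&& value)      { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg&& arg)                  // forwarding ("universal") reference
            {
                // std::forward keeps lvalues as lvalues and rvalues as rvalues,
                // so x is copy-constructed in the first case and move-constructed
                // (printing "moving") in the second.
                element x (std::forward<Arg> (arg));
            }
        };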

    Read the article

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In Visual Studio 2008 you get an "uninitialized local variable used" warning at compile time, and a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is: don't do that, but nobody's perfect. For example:

        struct Test
        {
            float GetMember() const { return member; }
            float member;
        };

        Test test;
        float f1 = test.member;      // Raises warning, asserts in VS debugger at runtime
        float f2 = test.GetMember(); // No problem, just keeps on going

    This surprised me, but it makes some sense -- the compiler can't assume that calling a function on an unused struct is an error, or how else would you initialize or construct it? And anything fancier quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are OK to call and when, especially just as a debugging aid. I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs.

    Still, it would seem that, within the context of the function call, it would know inside GetMember() that member wasn't initialized yet. I'm assuming it's not only relying on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or is it more tied to how C++ works?

    Read the article

  • Is extending a base class with a non-virtual destructor dangerous in C++?

    - by Akusete
    Take the following code:

        class A { };
        class B : public A { };
        class C : public A { int x; };

        int main (int argc, char** argv)
        {
            A* b = new B();
            A* c = new C();

            // in both cases, only ~A() is called, not ~B() or ~C()
            delete b; // is this ok?
            delete c; // does this line leak memory?

            return 0;
        }

    When calling delete on a class with a non-virtual destructor that has member variables (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no member variables and no explicit destructor behaviour (like class B), is everything OK?

    I ask this because I wanted to create a class to extend std::string (which I know is not recommended, but for the sake of the discussion just bear with it) and overload the += and + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the subclass has no members and does not need to do anything in its destructor? FYI, the += overload was to do proper file path formatting, so the path class could be used like:

        class path : public std::string
        {
            // ... overload +=, +
            // ... add last_path_component, remove_path_component, ext, etc...
        };

        path foo = "/some/file/path";
        foo = foo + "filename.txt";
        // and so on...

    I just wanted to make sure someone doing this:

        path* foo = new path();
        std::string* bar = foo;
        delete bar;

    would not cause any problems with memory allocation.

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have been building on this system for quite some time, and it currently outputs Latin1 (ISO-8859-1) to the web browser. These are the components:

    MySQL - all data is stored with the Latin1 character set.
    PHP - all PHP text files are stored on disk with Latin1 encoding.
    HTML - the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag.

    So, I'm trying to understand how the encodings of the different parts come into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML code, the browser will read it incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
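
    On the MySQL side specifically, the conversion is usually done per database, per table and per connection; a sketch of the relevant statements (database and table names are placeholders, and this assumes the data really is stored as Latin1 - converting already-UTF-8 bytes a second time is how double-encoding happens, so test on a copy first):

        -- Default character set for new tables in the database:
        ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Convert an existing table: changes the column character sets and
        -- re-encodes the data stored in them.
        ALTER TABLE posts CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Per connection, so PHP and MySQL agree on the wire encoding:
        SET NAMES utf8;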

    Read the article

  • Java swing app can't find image

    - by KáGé
    Hello, I'm making a torpedo game for school in Java with a Swing GUI; please see the zipped source HERE. I use custom button icons and mouse cursors made from images stored in the /bin/resource/graphics/default folder's subfolders, where the root folder is the program's root folder (it will be the root in the final .jar as well, I suppose), which apart from "bin" contains a "main" folder with all the classes. The relative path of the resources is stored in MapStruct.java's shipPath and mapPath variables.

    Now, Battlefield.java's PutPanel class finds them all right and sets up its buttons' icons fine, but every other class fails to get its icons, e.g. Table.java's setCursor, which should set the mouse cursor for all its elements to the selected ship's image, or Field.java's this.button.setIcon(icon); in the constructor, which should set the icon for the buttons of the "water". I watched in the debugger what happens, and the images stay null after loading, though the paths seem to be correct. I've also tried to write a test file in the image folder, but the method returns a FileNotFoundException. I've tried to get the path of the class to see if it runs from the supposed place, and it seems it does, so I really can't find the problem. Could anyone please help me? Thank you.
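
    One thing that commonly causes exactly this pattern (works in one class, fails in others, and breaks once the code is packed into a .jar) is building paths relative to the current working directory instead of loading through the classpath. A hedged sketch of classpath-based loading; the resource path and the button variable are assumptions adapted from the folder layout described above:

        // Loads an image via the classpath, so it works both from /bin and from
        // inside the final .jar. The path below is an assumed example.
        java.net.URL url = getClass().getResource("/resource/graphics/default/ships/ship1.png");
        if (url == null) {
            System.err.println("Resource not found on classpath");
        } else {
            javax.swing.ImageIcon icon = new javax.swing.ImageIcon(url);
            button.setIcon(icon);   // 'button' stands in for the component being configured
        }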

    Read the article

  • Handle multiple db updates from c# in SQL Server 2008

    - by joeriks
    I'd like to find a way to handle multiple updates to a SQL DB (with one single DB round trip). I read about table-valued parameters in SQL Server 2008 (http://www.codeproject.com/KB/database/TableValueParameters.aspx), which seem really useful. But it seems I need to create both a stored procedure and a table type to use them. Is that true? Perhaps due to security?

    I would like to run a text query simply like this:

        var sql = "INSERT INTO Note (UserId, note) SELECT * FROM @myDataTable";
        var myDataTable = ... // some System.Data.DataTable
        var cmd = new System.Data.SqlClient.SqlCommand(sql, conn);
        var param = cmd.Parameters.Add("@myDataTable", System.Data.SqlDbType.Structured);
        param.Value = myDataTable;
        cmd.ExecuteNonQuery();

    So, A) do I have to create both a stored procedure and a table type to use TVPs? And B) what alternative method is recommended to send multiple updates (and inserts) to SQL Server?
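
    For what it's worth on (A): the user-defined table type does have to exist on the server, but a stored procedure is not required - a plain parameterized text command like the one above can use a TVP, provided the parameter's TypeName names the table type. A sketch of the one-time server-side setup (the type name and columns are examples matching the query above):

        -- One-time setup: the table type the TVP will be declared as.
        CREATE TYPE dbo.NoteTableType AS TABLE
        (
            UserId int           NOT NULL,
            Note   nvarchar(500) NOT NULL
        );

    On the client side the code from the post should then also set param.TypeName = "dbo.NoteTableType"; before executing, so ADO.NET knows which server-side type the DataTable maps to.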

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS, take:

    1. a value that is unique to the DBMS server instance as part of the salt,
    2. and the username as a second part of the salt,
    3. and create the concatenation of the salt with the actual password,
    4. and hash the whole string using the SHA-256 algorithm,
    5. and store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes will be different. Likewise, the user name will not be secret - just the password proper.

    Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.)

    There are quite a lot of related SO questions - this list is unlikely to be comprehensive:

    Encrypting/Hashing plain text passwords in database
    Secure hash and salt for PHP passwords
    The necessity of hiding the salt for a hash
    Client-side MD5 hash with time salt
    Simple password encryption
    Salt generation and open source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).

    Read the article

  • Scheme: what are the benefits of letrec?

    - by Ixmatus
    While reading "The Seasoned Schemer" I've begun to learn about letrec. I understand what it does (can be duplicated with a Y-Combinator) but the book is using it in lieu of recurring on the already defined function operating on arguments that remain static. An example of an old function using the defined function recurring on itself (nothing special): (define (substitute new old lat) (cond ((null? l) '()) ((eq? (car l) old) (cons new (substitute new old (cdr l)))) (else (cons (car l) (substitute new old (cdr l)))))) Now for an example of that same function but using letrec: (define (substitute new old lat) (letrec ((replace (lambda (l) (cond ((null? l) '()) ((eq? (car l) old) (cons new (replace (cdr l)))) (else (cons (car l) (replace (cdr l)))))))) (replace lat))) Aside from being slightly longer and more difficult to read I don't know why they are rewriting functions in the book to use letrec. Is there a speed enhancement when recurring over a static variable this way because you don't keep passing it?? Is this standard practice for functions with arguments that remain static but one argument that is reduced (such as recurring down the elements of a list)? Some input from more experienced Schemers/LISPers would help!

    Read the article
