Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 347 of 371.

  • How do I detect server status in a Java port scanner implementation?

    - by akz
    I am writing a port scanner in Java and I want to be able to distinguish between the following four cases: the port is open; the port is open and a server banner was read; the port is closed; the server is not live. I have the following code:

        InetAddress address = InetAddress.getByName("google.com");
        int[] ports = new int[]{21, 22, 23, 80, 443};
        for (int i = 0; i < ports.length; i++) {
            int port = ports[i];
            Socket socket = null;
            try {
                socket = new Socket(address, port);
                socket.setSoTimeout(500);
                System.out.println("port " + port + " open");
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String line = reader.readLine();
                if (line != null) {
                    System.out.println(line);
                }
                socket.close();
            } catch (SocketTimeoutException ex) {
                // port was open but nothing was read from the input stream
                ex.printStackTrace();
            } catch (ConnectException ex) {
                // port is closed
                ex.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (socket != null && !socket.isClosed()) {
                    try {
                        socket.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    The problem is that I get a ConnectException both when the port is closed and when the server cannot be reached, only with different exception messages: java.net.ConnectException: Connection timed out: connect when the connection was never established, and java.net.ConnectException: Connection refused: connect when the port was closed. So I cannot distinguish between those two cases without digging into the actual exception message. The same thing happens when I try a different approach to creating the socket. If I use:

        socket = new Socket();
        socket.setSoTimeout(500);
        socket.connect(new InetSocketAddress(address, port), 1000);

    I have the same problem, but with SocketTimeoutException instead: I get java.net.SocketTimeoutException: Read timed out if the port was open but there was no banner to read, and java.net.SocketTimeoutException: connect timed out if the server is not live or the port is closed. Any ideas? Thanks in advance!
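
    One way to tell the cases apart without parsing exception messages (a suggestion, not from the post) is to connect explicitly and catch the read timeout separately from the connect timeout, so each outcome surfaces as a distinct exception type. A minimal sketch, assuming Java 7+ for try-with-resources:

        import java.io.*;
        import java.net.*;

        public class PortProbe {
            public static void main(String[] args) throws Exception {
                InetAddress address = InetAddress.getByName("google.com");
                for (int port : new int[]{21, 22, 23, 80, 443}) {
                    try (Socket socket = new Socket()) {
                        // Connect explicitly so connect failures are distinct
                        // from read failures.
                        socket.connect(new InetSocketAddress(address, port), 1000);
                        socket.setSoTimeout(500);
                        BufferedReader reader = new BufferedReader(
                                new InputStreamReader(socket.getInputStream()));
                        try {
                            String banner = reader.readLine();
                            System.out.println(port + ": open"
                                    + (banner != null ? ", banner: " + banner : ""));
                        } catch (SocketTimeoutException e) {
                            // Connect succeeded but the read timed out: open, no banner.
                            System.out.println(port + ": open, no banner");
                        }
                    } catch (ConnectException e) {
                        // Active refusal: the host is live, the port is closed.
                        System.out.println(port + ": closed");
                    } catch (SocketTimeoutException e) {
                        // The connect itself timed out: host down or filtered.
                        System.out.println(port + ": server not reachable");
                    } catch (IOException e) {
                        System.out.println(port + ": error: " + e.getMessage());
                    }
                }
            }
        }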

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it.

    However, the time window in which a duplicate computation can be requested may be arbitrarily small: for all practical purposes, I could get a thousand or a million messages requesting the same expensive computation at the same instant. There is a commercial product called Gigaspaces that supposedly handles this situation, but there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through it, a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do:

    1. Create a trait to mark a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach.

    2. Smartly (by hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka could also choose to cache only the results of messages that were expensive to process; messages that took very little time to process can be re-processed if needed, so there is no need to waste precious buffer space caching them and their results.

    3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically; essentially, the duplicate messages would never get received by a new actor for processing. They would silently vanish, and the result from processing the message once (whether that computation was already done in the past or is ongoing right then) would get sent to all appropriate recipients: immediately if already available, and upon completion of the computation if not.

    Note that messages should be considered identical even if their "reply" fields differ, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all.

    If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know.

    Thanks, Is Awesome, Scala
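
    In the absence of framework support, one user-level pattern (a sketch of my own, not an Akka feature, using the classic actor API and invented message shapes) is a deduplicating front actor that caches finished results and queues the requesters of in-flight computations, so each distinct message is computed exactly once:

        import akka.actor._

        case class Compute(key: String, input: String)  // key identifies identical work
        case class Result(key: String, value: String)   // worker replies with this

        class DedupActor(worker: ActorRef) extends Actor {
          private var cache   = Map.empty[String, String]          // completed results
          private var pending = Map.empty[String, List[ActorRef]]  // waiting requesters

          def receive = {
            case c @ Compute(key, _) =>
              cache.get(key) match {
                case Some(result) =>
                  sender() ! Result(key, result)                // already computed
                case None if pending.contains(key) =>
                  pending += key -> (sender() :: pending(key))  // computation in flight
                case None =>
                  pending += key -> List(sender())
                  worker ! c                                    // first request: do the work
              }

            case r @ Result(key, value) =>
              cache += key -> value
              pending.getOrElse(key, Nil).foreach(_ ! r)        // fan out to all requesters
              pending -= key
          }
        }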

  • web design PSD to html -> more direct ways?

    - by Assembler
    At work I see one colleague designing a site in Photoshop/Fireworks, and another taking this data, slicing it up and using Dreamweaver to rebuild the same thing from scratch. It seems like too much mucking around! I know that Photoshop can output table-based HTML, and Fireworks will create divs with absolute positioning; neither appears to be very helpful. Admittedly, I haven't tried much of (DW/FW) (CS4/CS3) since becoming a programmer, so I don't know if new versions are addressing this workflow issue, but are we still double-handling things?

    Can we attach some sort of layout metadata (this is a rollover button, this will be a SWF, this will be text, this logo will hide "xyz" <h1> text, etc.) to slices to aid in layout generation? Are there some secret tools which assist in this conversion process? Or are we still restricted to doing things by hand? The frustration continues when said hand-built page needs to be reworked again to fit Smarty templates/Wordpress/a generic CMS. I acknowledge that designers need to be free of systems to be able to do whatever, but most conventional sites have:

    - a header with navigation
    - a sidebar with more links
    - the main content part
    - maybe another sidebar
    - a footer

    Given the similarity of a lot of components, shouldn't there be a more systematic approach to going from sliced designs to functional HTML? Or am I over-simplifying things?

    Edit: Mmmmm... I suppose I will accept an answer, but the answers weren't really what I was looking for. It just seems like designing the DOM is a bit of a holy grail ("It's only a model!"), and maybe with all the "groovy" things you can do with HTML and Javascript it would be mighty hard work, but with a set of constraints (that 960 stuff looks interesting), some well designed reset style sheets and a bit of... fairy dust? we should be able to improve the workflow. Photoshop's tables by themselves are pretty much useless, I agree, but surely we can take this data, and then select a group of cells and say "right, this is a text div, overflow:auto" or "these cells are an image block, style it with the same height/width as the selected area". Admittedly, here at work there are other elephants in the room that need to make their formal introductions to management, but some parts of the design-to-page workflow seem... uneducated at best.

  • How do you programmatically set a Style on a View?

    - by Greg
    I would like to do something like this:

        <Button android:id="@+id/button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                style="@style/SubmitButtonType" />

    but in code. The XML approach works fine provided that SubmitButtonType is defined. Now, what I assume happens is that the aapt parser runs through this XML and generates an AttributeSet. That AttributeSet, when passed to context/theme#obtainStyledAttributes(), will have the style reference mask anything that is not written inline in this tag. Great, that's fine! Now, how do we do this programmatically? Button, as well as other View types, has a constructor of the form:

        <Widget>(Context context, AttributeSet set, int defStyle)

    So I thought this would work:

        Button button = new Button(context, null, R.style.SubmitButtonType);

    However, I am finding that defStyle is badly documented: it really needs to be a resource ID of an attribute (from R.attr) that will be passed to obtainStyledAttributes() as the attribute resource, not the style resource. After looking at the code, all the view implementations seem to pass 0 as the style reference. I don't see the harm in having it passed as both the attr and the style resource (more flexible, and negligible overhead). However, I might be approaching this all wrong. How do you do this in code, then, other than by setting each individual element of the style on the specific widget you want to style (only possible by looking at the code to see which param maps to which method or set of methods)? The only way I have found to do this is:

        <declare-styleable>
            <attr name="totallyAdhoc_attribute_just_for_this_case" format="reference" />
        </declare-styleable>

        <style name="MyAlreadyExistantTheme">
            ...
            <item name="totallyAdhoc_attribute_just_for_this_case">@style/SubmitButtonType</item>
        </style>

    And instead of passing R.style.SubmitButtonType as defStyle, I pass the new R.attr.totallyAdhoc_attribute_just_for_this_case:

        Button button = new Button(context, null, R.attr.totallyAdhoc_attribute_just_for_this_case);

    This works, but sounds way too complicated.
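
    Another route that avoids the ad-hoc attribute (a common workaround offered here as a suggestion, not the post's method) is to keep the style in a tiny XML template and inflate it, since a style can only be attached at construction time. A sketch, where res/layout/styled_button.xml is a hypothetical one-widget layout and parent is the intended container (may be null):

        <!-- res/layout/styled_button.xml -->
        <Button xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            style="@style/SubmitButtonType" />

        // In code: the inflated Button carries the style.
        Button button = (Button) LayoutInflater.from(context)
                .inflate(R.layout.styled_button, parent, false);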

  • Where do you put your unit tests?

    - by soulmerge
    I have found several conventions for housekeeping unit tests in a project, and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the convention that best encourages easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:

    1. One folder for production code, another for unit tests: This separates the unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will, so I suppose, either browse the implementation or the unit tests (or, more commonly, the implementation only). The advantage of unit tests being another viewpoint on your classes is lost; those two viewpoints are just too far apart, IMO.

    2. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line), it just bloats the class unnecessarily.

    3. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting the unit tests for a class into another file ending in .test.php would make the tests much more visible to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded it for some reason (i.e. I have not seen a Java project with the files Foo.java and FooTest.java in the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java); many devs I know use vim/emacs or similar editors with little support for PHP development per se.

    What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility for reviewers?

  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with that code. Some pertinent factors:

    - It's only running on Windows, in a relatively controlled corporate environment.
    - I have a Windows service running on the machine, built in C#.
    - Storing the data in a local datastore (like SQLite) would be useful for other reasons.
    - The volume of data is low, e.g. 10 KB of uncompressed XML every few minutes, and the exchange isn't very 'chatty'.
    - The data exchange can be asynchronous for the most part, if not completely.
    - As with all projects, I have limited resources, so I want an option that's relatively easy.
    - It doesn't have to be ultra-high performance, but it shouldn't add significant overhead.
    - I'm planning on building the extension in JavaScript (although I could be convinced otherwise if really necessary).

    Some options I'm considering:

    1. Use an XPCOM to .NET/COM bridge.
    2. Use a SQLite db: the extension would read from and save to it. The C# code would run in the service, populating the db and then processing data created by the service.
    3. Use TCP sockets to communicate between the extension and the service, and let the service manage a local data store.

    My problem with (1) is that I think this will be tricky and not so easy. But I could be completely wrong? The main problem I see with (2) is the locking of SQLite: only a single process can write data at a time, so there'd be some blocking. However, it would be nice generally to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard, or what approach to take on the protocol: something custom or HTTP. Any comments on these ideas or other suggestions?

    UPDATE: I was planning on building the extension in JavaScript rather than C++.
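
    For what it's worth, option (3) with HTTP rather than a raw socket can be quite small on the C# side. A hedged sketch of a listener the service could host (the URL, port and payload handling are invented for illustration; SaveToStore is a hypothetical persistence call):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class ExtensionEndpoint
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8787/extension/");
                listener.Start();
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    if (ctx.Request.HttpMethod == "GET")
                    {
                        // Serve configuration data to the extension.
                        byte[] config = Encoding.UTF8.GetBytes("<config/>");
                        ctx.Response.ContentType = "text/xml";
                        ctx.Response.OutputStream.Write(config, 0, config.Length);
                    }
                    else
                    {
                        // Accept output posted by the extension and hand it
                        // to the local data store managed by the service.
                        string xml = new StreamReader(ctx.Request.InputStream).ReadToEnd();
                        // SaveToStore(xml);  // hypothetical
                    }
                    ctx.Response.Close();
                }
            }
        }

    On the JavaScript side the extension can then talk to this endpoint with plain XMLHttpRequest, which keeps the protocol work minimal.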

  • Passing ViewModel for backbone.js from MVC3 Server-Side

    - by Roman
    In ASP.NET MVC there is Model, View and Controller. The MODEL represents entities which are stored in the database, and is essentially all the data used in an application (for example, generated by EntityFramework with the "DB First" approach). Not all the data from the model should be shown in the view (for example, hashes of passwords), so you create a VIEW MODEL, one for each strongly-typed Razor view you have in the application. Like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MyProject.ViewModels.SomeController.SomeAction
        {
            public class ViewModel
            {
                public ViewModel()
                {
                    Entities1 = new List<ViewEntity1>();
                    Entities2 = new List<ViewEntity2>();
                }

                public List<ViewEntity1> Entities1 { get; set; }
                public List<ViewEntity2> Entities2 { get; set; }
            }

            public class ViewEntity1
            {
                // some properties from the original DB entity you want to show
            }

            public class ViewEntity2
            {
            }
        }

    When you create complex client-side interfaces (I do), you use some pattern for the JavaScript on the client, MVC or MVVM (I know only these). So, with MVC on the client you have another model (a Backbone.Model, for example), which is the third model in the application. That is a bit much. Why don't we use the same view model on the client (in backbone.js or another framework)? Is there a way to transfer a C#-coded model to a JS-coded one?

    In the MVVM pattern, with knockout.js, you can do this. In SomeAction.cshtml:

        <div style="display: none;" id="view_model">@Json.Encode(Model)</div>

    and then in JavaScript code:

        var ViewModel = ko.mapping.fromJSON($("#view_model").get(0).innerHTML);

    Now you can extend your ViewModel with some actions, event handlers, etc.:

        ko.utils.extend(ViewModel, {
            some_function: function () {
                // some code
            }
        });

    So we are not building the same view model on the client again; we are transferring the existing view model, or at least its data, from the server. But knockout.js is not suitable for me: you can't build a complex UI with it, it is just data binding. I need something more structural, like backbone.js. The only way I can see right now to build a ViewModel for backbone.js is to rewrite the same view model in JS by hand. Is there any way to transfer it from the server, to reuse the same view model in the server view and the client view?
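
    The same JSON-transfer trick works for Backbone: serialize the server view model into the page and hydrate Backbone models from it, so only the behaviour is written by hand while the attribute shape comes from the server. A hedged sketch (my suggestion; property names follow the ViewModel above):

        In the Razor view:

            <script type="text/javascript">
                var viewModelData = @Html.Raw(Json.Encode(Model));
            </script>

        Then on the client:

            var Entity1 = Backbone.Model.extend({
                // behaviour only; the attributes come from the server data
                some_function: function () { /* some code */ }
            });

            var Entities1 = Backbone.Collection.extend({ model: Entity1 });

            var entities1 = new Entities1(viewModelData.Entities1);
            var entities2 = new Backbone.Collection(viewModelData.Entities2);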

  • ZF Autoloader to load ancestor and requested class

    - by Pekka
    I am integrating Zend Framework into an existing application. I want to switch the application over to Zend's autoloading mechanism to replace dozens of include() statements. I have a specific requirement for the autoloading mechanism, though. Allow me to elaborate.

    The existing application uses a core library (independent of ZF), for example:

        /Core/Library/authentication.php
        /Core/Library/translation.php
        /Core/Library/messages.php

    This core library is to remain untouched at all times and serves a number of applications. The library contains classes like:

        class ancestor_authentication { ... }
        class ancestor_translation { ... }
        class ancestor_messages { ... }

    In the application, there is also a Library directory:

        /App/Library/authentication.php
        /App/Library/translation.php
        /App/Library/messages.php

    These includes extend the ancestor classes and are the ones that actually get instantiated in the application:

        class authentication extends ancestor_authentication { }
        class translation extends ancestor_translation { }
        class messages extends ancestor_messages { }

    Usually these class definitions are empty. They simply extend their ancestors and provide the class name to instantiate:

        $authentication = new authentication();

    The purpose of this solution is to be able to easily customize aspects of the application without having to patch the core libraries. Now, the autoloader I need would have to be aware of this structure. When an object of the class authentication is requested, the autoloader would have to:

    1. load /Core/Library/authentication.php
    2. load /App/Library/authentication.php

    My current approach would be creating a custom function and binding it to Zend_Loader_Autoloader for a specific namespace prefix. Is there already a way to do this in Zend that I am overlooking? The accepted answer in this question kind of implies there is, but that may just be a bad choice of wording. Are there extensions to the Zend Autoloader that do this? Can you (I am new to ZF) think of an elegant way, conforming with the spirit of the framework, of extending the Autoloader with this functionality? I'm not necessarily looking for a ready-made implementation; some pointers ("This should be an extension to the xyz method that you would call like this...") would already be enough.
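
    One observation that may simplify this (my suggestion, not from the post): PHP's autoload chaining already does the two-step load for you. When /App/Library/authentication.php is included and declares "extends ancestor_authentication", the autoloader fires again for the undefined ancestor, so a single callback that maps both naming patterns is enough. A hedged sketch using Zend_Loader_Autoloader (paths are assumptions; closures need PHP 5.3+):

        <?php
        $autoloader = Zend_Loader_Autoloader::getInstance();

        $autoloader->pushAutoloader(function ($class) {
            if (strpos($class, 'ancestor_') === 0) {
                // ancestor_authentication -> /Core/Library/authentication.php
                $file = '/Core/Library/' . substr($class, strlen('ancestor_')) . '.php';
            } else {
                // authentication -> /App/Library/authentication.php
                // (including this triggers autoload of the ancestor)
                $file = '/App/Library/' . $class . '.php';
            }
            if (is_file($file)) {
                include $file;
            }
        });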

  • Can't iterate over nested dict in Django

    - by fredrik
    Hi, I'm trying to iterate over a nested dict list. The first level works fine, but the second level is treated like a string, not a dict. In my template I have this:

        {% for product in Products %}
            <li>
                <p>{{ product }}</p>
                {% for partType in product.parts %}
                    <p>{{ partType }}</p>
                    {% for part in partType %}
                        <p>{{ part }}</p>
                    {% endfor %}
                {% endfor %}
            </li>
        {% endfor %}

    It's the {{ part }} that just lists one character at a time, based on partType; it seems that it's treated like a string. I can, however, reach all the dicts via dot notation, just not with a for loop. The current output looks like this:

        Color
        C
        o
        l
        o
        r
        Style
        S
        .....

    The Products object looks like this in the log:

        [{'product': <models.Products.Product object at 0x1076ac9d0>,
          'parts': {u'Color': {'default': u'Red', 'optional': [u'Red', u'Blue']},
                    u'Style': {'default': u'Nice', 'optional': [u'Nice']},
                    u'Size': {'default': u'8', 'optional': [u'8', u'8.5']}}}]

    What I'm trying to do is pair together a dict/list for a product from a number of different SQL queries. The web handler looks like this:

        typeData = Products.ProductPartTypes.all()
        productData = Products.Product.all()
        langCode = 'en'
        productList = []

        for product in productData:
            typeDict = {}
            productDict = {}
            for type in typeData:
                typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

            productDict['product'] = product
            productDict['parts'] = typeDict

            defaultPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.defaultParts)
            optionalPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.optionalParts)

            for defaultPart in defaultPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = defaultPart.partLabelList, langCode = langCode).get()
                productDict['parts'][defaultPart.type.typeId]['default'] = label.partLangLabel

            for optionalPart in optionalPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = optionalPart.partLabelList, langCode = langCode).get()
                productDict['parts'][optionalPart.type.typeId]['optional'].append(label.partLangLabel)

            productList.append(productDict)

        logging.info(productList)

        templateData = {
            'Languages' : Settings.Languges.all().order('langCode'),
            'ProductPartTypes' : typeData,
            'Products' : productList
        }

    I've tried making the dict in a number of different ways: making a list first, then a dict, using tuples, anything I could think of. Any help is welcome!

    Bonus: if someone has another approach to the SQL queries, that is more than welcome. It feels kind of stupid to run that number of queries. What is happening is that each product part has a different label based on langCode.

    ..fredrik
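
    The usual explanation (my diagnosis, not from the post): iterating a dict in a Django template yields its keys, and iterating a string yields its characters, which is exactly the one-character-per-line output shown. Unpacking with .items gives both keys and values. A sketch of the template under that assumption:

        {% for product in Products %}
            <li>
                <p>{{ product.product }}</p>
                {% for typeId, part in product.parts.items %}
                    <p>{{ typeId }}</p>
                    <p>{{ part.default }}</p>
                    {% for option in part.optional %}
                        <p>{{ option }}</p>
                    {% endfor %}
                {% endfor %}
            </li>
        {% endfor %}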

  • Problem with pointers and getstring function

    - by volting
    I am trying to write a function to get a string from UART1. It's for an embedded system, so I don't want to use malloc. The pointer that is passed to the getstring function seems to point to garbage after gets_e_uart1() is called. I don't use pointers too often, so I'm sure it is something really stupid and trivial that I'm doing wrong. Regards, V

        int main()
        {
            char *ptr = 0;

            while(1)
            {
                gets_e_uart1(ptr, 100);
                puts_uart1(ptr);
            }
            return 0;
        }/*end main*/

        //-------------------------------------------------------------------------
        //gets a string and echoes it
        //returns 0 if there is no error
        char getstring_e_uart1(char *stringPtr_, const int SIZE_)
        {
            char buffer_[SIZE_];
            stringPtr_ = buffer_;
            int start_ = 0, end_ = SIZE_ - 1;
            char errorflag = 0;

            /*keep getting chars until newline char received*/
            while((buffer_[start_++] = getchar_uart1()) != 0x0D)
            {
                putchar_uart1(buffer_[start_]); //echo it

                /*check for end of buffer, wrap around if necessary*/
                if(start_ == end_)
                {
                    start_ = 0;
                    errorflag = 1;
                }
            }

            putchar_uart1('\n');
            putchar_uart1('\r');

            /*check for end of buffer, wrap around if necessary*/
            if(start_ == end_)
            {
                buffer_[0] = '\0';
                errorflag = 1;
            }
            else
            {
                buffer_[start_++] = '\0';
            }

            return errorflag;
        }

    Update: I decided to go with the approach of passing a pointer to an array to the function. This works nicely. Thanks to everyone for the informative answers.

    Updated code:

        //-------------------------------------------------------------------------
        //argument 1 should be a pointer to an array,
        //and the second argument should be the size of the array
        //gets a string and echoes it
        //returns 0 if there is no error
        char getstring_e_uart1(char *stringPtr_, const int SIZE_)
        {
            char *startPtr_ = stringPtr_;
            char *endPtr_ = startPtr_ + (SIZE_ - 1);
            char errorflag = 0;

            /*keep getting chars until newline char received*/
            while((*stringPtr_ = getchar_uart1()) != 0x0D)
            {
                putchar_uart1(*stringPtr_); //echo it
                stringPtr_++;

                /*check for end of buffer, wrap around if necessary*/
                if(stringPtr_ == endPtr_)
                {
                    stringPtr_ = startPtr_;
                    errorflag = 1;
                }
            }

            putchar_uart1('\n');
            putchar_uart1('\r');

            /*check for end of buffer, wrap around if necessary*/
            if(stringPtr_ == endPtr_)
            {
                stringPtr_ = startPtr_;
                *stringPtr_ = '\0';
                errorflag = 1;
            }
            else
            {
                *stringPtr_ = '\0';
            }

            return errorflag;
        }
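
    For completeness, a hedged sketch of the matching calling side: the caller owns the storage, so no malloc is needed, and the dangling-local-buffer problem from the first version (where stringPtr_ was pointed at a stack array that dies on return) goes away.

        #define LINE_MAX_LEN 100

        int main(void)
        {
            char line[LINE_MAX_LEN];   /* caller-owned buffer, no malloc */

            while (1) {
                /* Fills 'line' in place; returns non-zero on wraparound. */
                char error = getstring_e_uart1(line, LINE_MAX_LEN);
                if (!error) {
                    puts_uart1(line);
                }
            }
            return 0;
        }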

  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible: I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call. I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill.

    To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view). I create three child views and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options for notifying the parent:

    1. Define a protocol. The parent implements the protocol and sets itself as the delegate of each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case, the parent is notified that a view was touched, but it doesn't know which one. Not good enough.

    2. Same as #1, but define the methods in the protocol to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way, the parent knows exactly which view was touched. It just seems really strange for an object to pass a reference to itself.

    3. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification, with each of the three children as the notification sender. The children don't need to attach themselves to the user dictionary, but they do need to send the notification with a pointer to themselves as the scope.

    4. Same as the NSNotification approach, but rather than using separate methods, the parent could use just one, with a switch case or other branching, along with the notification's sender, to determine which path to take.

    5. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent, either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable.

    Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.
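
    Worth noting (an observation of mine, not from the post): option 2 is the established Cocoa convention rather than a strange one. The framework's own delegate protocols (UITableViewDelegate, NSTableViewDelegate and so on) all pass the sending object as the first parameter, precisely so one delegate can serve many children. A hedged sketch, with hypothetical property names:

        @class ChildView;

        @protocol ChildViewDelegate <NSObject>
        - (void)childViewWasTouched:(ChildView *)childView;
        @end

        // In the parent view controller (conforming to ChildViewDelegate):
        - (void)childViewWasTouched:(ChildView *)childView
        {
            if (childView == self.firstChild) {
                // react to the first child
            } else if (childView == self.secondChild) {
                // react to the second child
            }
        }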

  • Duplicate Blob field with foreach

    - by JGSilva
    I have some fields (blob) where I have uploaded some images. The images display correctly, and I can open them without problems in Photoshop, for example. I created a button where the user can duplicate the product, and everything works fine, but when it comes to duplicating the image entry I get some errors, like 1064, and other ones that I can't remember because I have been working inside this for three days.

    Because the original product can have 3 or more images, I select them and use a foreach. What I notice when I print the blob is that at the end it eats the next array element, as if it doesn't have an end. In other words, the next item ends up inside that utf-8 character in the print. That gave me some clue. The next approach was to save it somewhere and re-upload it. The problem is that only the first one works. When I download the saved image, it opens normally, so it is not a saving-to-disk problem. When I print the $result, the same thing happens: it's like the image is hungry and ate the next one. Here is the code. Notice: I created the [$count] to see if it was not an array-rewrite error. At the beginning of the foreach I even tried to, kind of, clear the vars...

        $count = 0;
        foreach ($original_image as $key => $val) {
            $count++;
            //$arquivo = '';
            //$image = '';
            //$file = '';
            //$this->image = '';
            //$return = '';

            $arquivo[$count] = $val['pi_id'].'.'.$val['pi_type'];
            $image[$count] = $caminho_url.$arquivo[$count];

            if (file_exists($image[$count])) {
                $this->image = Image::factory($image[$count]);
                $this->image->save($image[$count]);
                $file[$count] = mysql_real_escape_string(addslashes(fread(fopen($image[$count], "r"), filesize($image[$count]))));
                $return[$count] = Product::add_image($id_prod, $file[$count], $val['pi_type'], $val['pi_main']);
            } else {
                die('no');
            }
        }
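
    Not part of the original post, but one way to sidestep escaping problems with binary data entirely (double-escaping with mysql_real_escape_string plus addslashes is a likely culprit for corrupted blobs) is to bind the blob through a prepared statement instead of splicing it into the SQL string. A hedged sketch with PDO; the table and column names are made up:

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

        $stmt = $pdo->prepare(
            'INSERT INTO product_images (product_id, image, type, main)
             VALUES (:product_id, :image, :type, :main)'
        );

        foreach ($original_image as $val) {
            $blob = file_get_contents($caminho_url . $val['pi_id'] . '.' . $val['pi_type']);

            $stmt->bindValue(':product_id', $id_prod, PDO::PARAM_INT);
            $stmt->bindValue(':image', $blob, PDO::PARAM_LOB);  // binary-safe, no escaping
            $stmt->bindValue(':type', $val['pi_type']);
            $stmt->bindValue(':main', $val['pi_main']);
            $stmt->execute();
        }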

  • How to page multiple data sets in ASP.NET MVC

    - by REA_ANDREW
    On a single view I will have three sets of paged data, which means for each model I will have the objects, the page index and the page size. My initial thought was, for example:

        public class PagedModel<T> where T : class
        {
            public IList<T> Objects { get; set; }
            public int ModelPageIndex { get; set; }
            public int ModelPageSize { get; set; }
        }

    Then I have a model which is supplied to the action, for example:

        public class TypesViewModel
        {
            public PagedModel<ObjectA> Types1 { get; set; }
            public PagedModel<ObjectB> Types2 { get; set; }
            public PagedModel<ObjectC> Types3 { get; set; }
        }

    So if I then, for example, have the Index view inherit from the type:

        System.Web.Mvc.ViewPage<uk.co.andrewrea.forum.Web.Models.TypesViewModel>

    my initial action method for the Index is simply:

        public ActionResult Index()
        {
            var forDisplayPurposes = new TypesViewModel();
            return View(forDisplayPurposes);
        }

    If I then want to page, it is here that I am struggling to decide which action to take. Let's say that I select the next page of the Types2 paged model. What should the action look like in order to return the new view showing the second page of the Types2 paged model? I was thinking possibly of duplicating the action but using it with POST:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(TypesViewModel model)
        {
            return View(model);
        }

    Is this a good way to approach it? I understand there is always Session, but I was just wondering how such a thing is achieved currently out there, and whether any best methods have been mutually accepted. So, simply: one page with multiple paged models. How do you persist the data for each using a wrapper model? Which way should you pass in the model, and which way should you page the data, i.e. via a form POST?

    Lastly, I have seen routes take this into account, i.e.:

        {controller}/{action}/{id}/{pageindex}/{pagesize}

    but this only accounts for one model, and I do not really want to repeat the pageindex and pagesize values for the number of models I have inside the wrapper model. Thanks for your time! Andrew
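
    A hedged sketch of a GET-based alternative (parameter names and the repository call are invented): each model gets its own page parameter with a default, so paging one grid simply re-requests the others at their current pages, the URL stays bookmarkable, and nothing needs Session:

        public ActionResult Index(int pageA = 0, int pageB = 0, int pageC = 0, int pageSize = 10)
        {
            var model = new TypesViewModel
            {
                Types1 = new PagedModel<ObjectA>
                {
                    Objects = repository.GetPageA(pageA, pageSize),  // hypothetical data access
                    ModelPageIndex = pageA,
                    ModelPageSize = pageSize
                },
                // Types2 and Types3 built the same way from pageB and pageC
            };
            return View(model);
        }

    Each grid's paging links then carry all three page values, changing only its own.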

  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it is useful to store a set of arbitrary data in a table using a per-row key/value model rather than a rigid column/field model. The problem is, I want to store the values with their correct data types rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: this solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        ============================
        ID    Key1     Key2     Key3
        int   int      string   date
        ----------------------------
        1     Value1   Value2   Value3
        2     Value4   Value5   Value6

    Single-table solution: this allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        =============================================================
        ID    RecordID   Key      IntValue   StringValue   DateValue
        int   int        string   int        string        date
        -------------------------------------------------------------
        1     1          Key1     Value1     NULL          NULL
        2     1          Key2     NULL       Value2        NULL
        3     1          Key3     NULL       NULL          Value3
        4     2          Key1     Value4     NULL          NULL
        5     2          Key2     NULL       Value5        NULL
        6     2          Key3     NULL       NULL          Value6

    Multiple-table solution: this allows for more concise purposing of each table, though the code needs to know the data type in advance, as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   int
        -------------------------------
        1     1          Key1     Value1
        2     2          Key1     Value4

        StringValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   string
        -------------------------------
        1     1          Key2     Value2
        2     2          Key2     Value5

        DateValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   date
        -------------------------------
        1     1          Key3     Value3
        2     2          Key3     Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason the key name changes?
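
    A hedged DDL sketch of the single-table option (column names follow the diagrams above; the syntax is T-SQL flavoured): the check constraint mentioned in the post can be written as a sum of CASE expressions, enforcing exactly one typed value per row.

        CREATE TABLE DataValues (
            ID          INT          NOT NULL PRIMARY KEY,
            RecordID    INT          NOT NULL,
            [Key]       VARCHAR(100) NOT NULL,
            IntValue    INT          NULL,
            StringValue VARCHAR(500) NULL,
            DateValue   DATETIME     NULL,
            -- exactly one of the three typed columns must be non-null
            CONSTRAINT CK_DataValues_OneValue CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );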

  • Is there any loose-coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For large programs, the standard way to challenge complexity is to divide the program code into small objects. Most actual programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches, supported by most languages, are composition (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delegates (C#) and signals+slots (C++/Qt).

    I like the delegates/signals idea, since while connecting two objects I can connect individual methods with each other, without the objects knowing anything of each other. For C#, it will look like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        object1.SomethingHappened += object2.HandleSomething;

    In this code, if object1 calls its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 will be called. For C++/Qt, it will look like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        connect( object1, SIGNAL(SomethingHappened()), object2, SLOT(HandleSomething()) );

    The result will be exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since if the program's code base grows I can change connections and add new ones without creating tons of interfaces.

    After examining Objective-C, I haven't found any way to use this technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I will be forced to give it pointers to each of the objects it must be connected to.

    So, the question :). Is there any approach in Objective-C programming that closely resembles the delegate / signal+slot types of connection, rather than 'give the first object an entire pointer to the second object so it can pass a message to it'? Method-level connections are a bit more preferable to me than object-level connections ^_^.
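
    Cocoa's closest built-in match (my suggestion, not spelled out in the post) is NSNotificationCenter: the receiver connects one of its selectors to a named "signal", and neither object holds a pointer to the other. A hedged sketch:

        // object2 connects its handler to the "signal":
        [[NSNotificationCenter defaultCenter] addObserver:object2
                                                 selector:@selector(handleSomething:)
                                                     name:@"SomethingHappened"
                                                   object:nil];

        // object1 "emits" without knowing who is listening:
        [[NSNotificationCenter defaultCenter] postNotificationName:@"SomethingHappened"
                                                            object:object1];

        // In CObject2:
        - (void)handleSomething:(NSNotification *)notification
        {
            // react; [notification object] identifies the sender if needed
        }

    Like Qt's connect, this wires an individual method to a named event at the method level; the trade-off is that, unlike C# delegates, the connection is keyed by a string name rather than checked at compile time.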

  • Publishing/subscribing multiple subsets of the same server collection

    - by matb33
    How does one go about publishing different subsets (or "views") of a single collection on the server as multiple collections on the client? Here is some pseudo-code to help illustrate my question.

    The items collection on the server: assume that I have an items collection on the server with millions of records. Let's also assume that 50 records have the enabled property set to true, and 100 records have the processed property set to true. All others are set to false.

        items:

        { "_id": "uniqueid1", "title": "item #1", "enabled": false, "processed": false },
        { "_id": "uniqueid2", "title": "item #2", "enabled": false, "processed": true },
        ...
        { "_id": "uniqueid458734958", "title": "item #458734958", "enabled": true, "processed": true }

    Server code: let's publish two "views" of the same server collection. One will send down a cursor with 50 records, and the other will send down a cursor with 100 records. There are over 458 million records in this fictitious server-side database, and the client does not need to know about all of those (in fact, sending them all down would probably take several hours in this example):

        var Items = new Meteor.Collection("items");

        Meteor.publish("enabled_items", function () {
            // Only 50 "Items" have enabled set to true
            return Items.find({enabled: true});
        });

        Meteor.publish("processed_items", function () {
            // Only 100 "Items" have processed set to true
            return Items.find({processed: true});
        });

    Client code: in order to support the latency compensation technique, we are forced to declare a single collection Items on the client. It should become apparent where the flaw is: how does one differentiate between Items for enabled_items and Items for processed_items?

        var Items = new Meteor.Collection("items");

        Meteor.subscribe("enabled_items", function () {
            // This will output 50, fine
            console.log(Items.find().count());
        });

        Meteor.subscribe("processed_items", function () {
            // This will also output 50, since we have no choice but to use
            // the same "Items" collection.
            console.log(Items.find().count());
        });

    My current solution involves monkey-patching _publishCursor to allow the subscription name to be used instead of the collection name. But that won't do any latency compensation; every write has to round-trip to the server:

        // On the client:
        var EnabledItems = new Meteor.Collection("enabled_items");
        var ProcessedItems = new Meteor.Collection("processed_items");

    With the monkey-patch in place, this will work. But go into offline mode and changes won't appear on the client right away; we'll need to be connected to the server to see changes. What's the correct approach?
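
    One pattern that keeps latency compensation (a suggestion of mine, not from the post): let both publications feed the single client-side Items collection, and re-apply the selectors on the client to separate the views. The client receives the union of both subsets, and minimongo queries over 150 documents are cheap:

        // Client: one collection receives the union of both publications.
        var Items = new Meteor.Collection("items");

        Meteor.subscribe("enabled_items");
        Meteor.subscribe("processed_items");

        // Separate the "views" locally with the same selectors the server used.
        var enabledItems = function () {
          return Items.find({enabled: true});    // 50 documents
        };
        var processedItems = function () {
          return Items.find({processed: true});  // 100 documents
        };

    Writes still go through the shared Items collection, so stub methods and offline behaviour keep working.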

  • Generate navigation from a multi-dimensional array

    - by rg88
    The question: how do I generate navigation, allowing for applying different classes to different sub-items, from a multi-dimensional array? Here is how I was doing it before I had any need for multi-level navigation:

        Home
        Pics
        About

    was generated by calling nav():

        function nav(){
            $links = array(
                "Home" => "home.php",
                "Pics" => "pics.php",
                "About" => "about.php"
            );
            $base = basename($_SERVER['PHP_SELF']);
            foreach($links as $k => $v){
                echo buildLinks($k, $v, $base);
            }
        }

    Here is buildLinks():

        function buildLinks($name, $page, $selected){
            if($selected == $page){
                $thelink = "<li class=\"selected\"><a href=\"$page\">$name</a></li>\n";
            } else {
                $thelink = "<li><a href=\"$page\">$name</a></li>\n";
            }
            return $thelink;
        }

    My question, again: how would I achieve the following nav (and notice that the visible sub-navigation elements are only present when on that specific page):

        Home
            something1
            something2
        Pics
        About

    and...

        Home
        Pics
            people
            places
        About

    What I've tried: from looking at it, it would seem that some iterator in the SPL would be a good fit for this, but I'm not sure how to approach it. I have played around with RecursiveIteratorIterator, but I'm not sure how to apply a different style to only the submenu items, and also how to show these items only if you are on the correct page. I built this array to test with, but I don't know how to work with the submenu1 items individually:

        $nav = array(
            array(
                "Home" => "home.php",
                "submenu1" => array(
                    "something1" => "something1.php",
                    "something2" => "something2.php")
            ),
            array("Pics" => "pics.php"),
            array("About" => "about.php")
        );

    The following will print out the lot in order, but how do I apply, say, a class name to the submenu1 items, or show them only when the person is on, say, the "Home" page?

        $iterator = new RecursiveIteratorIterator(new RecursiveArrayIterator($nav));
        foreach($iterator as $key => $value) {
            echo $key.' -- '.$value.'<br />';
        }
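
    A hedged sketch of one plain-recursion alternative to the SPL iterators (the array shape is simplified to name => array(page, children), which is an assumption, not the post's structure): submenus get their own class, and a submenu is emitted only when the current page is the parent or one of its children.

        <?php
        $nav = array(
            'Home'  => array('home.php', array(
                'something1' => 'something1.php',
                'something2' => 'something2.php',
            )),
            'Pics'  => array('pics.php', array(
                'people' => 'people.php',
                'places' => 'places.php',
            )),
            'About' => array('about.php', array()),
        );

        function buildNav(array $items, $current, $class = 'nav')
        {
            $html = "<ul class=\"$class\">\n";
            foreach ($items as $name => $item) {
                list($page, $children) = is_array($item) ? $item : array($item, array());
                $selected = ($page === $current);
                // Show a submenu only when the user is on the parent page
                // or on one of its child pages.
                $active = $selected || in_array($current, $children, true);
                $html .= $selected
                    ? "<li class=\"selected\"><a href=\"$page\">$name</a>"
                    : "<li><a href=\"$page\">$name</a>";
                if ($active && $children) {
                    $html .= "\n" . buildNav($children, $current, 'subnav');
                }
                $html .= "</li>\n";
            }
            return $html . "</ul>\n";
        }

        echo buildNav($nav, basename($_SERVER['PHP_SELF']));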

  • I have to do two seemingly mutually exclusive things on leaving an asp:textbox. Please help me get

    - by aape
    This project has gone from being a simple '99 Ford F-150 to the Homer. I've got controls with a gridview with textboxes for data entry. All the user controls on the pages are in AJAX UpdatePanels. The user types in a database column or budget entity or some other financial thing they want to include in the report. The textboxes in the gridview have AutoPostBack = true set.

    Overly long background info: when the user leaves the textbox, during the postback (triggered by OnTextChanged) I do some validation on the server against their entry: regexes, do they have rights to that column, is that column locked, etc. If it fails, I put an error message next to the textbox. If it passes, I wipe out any title or error that used to be next to the code. Focus is getting lost from the postback if they're tabbing out of the box, rather than going to the next textbox in the gridview. So to fix that I need, if they're leaving the textbox via the tab key, to also figure out which textbox or gridview row they're on and, if they're not on the last row, after the validation and labeling, put the focus on the textbox in the next row. I can't figure out how, in OnTextChanged, to find out what caused me to leave the textbox, so I'm thinking of using a JavaScript onkeyup to test the key pressed and then find the next box, etc. But the OnTextChanged fires first and then the JavaScript never does. Also, since the control is all AJAXed, the JavaScript can't find the textboxes: when you enter the page everything is collapsed (the requirements people loooove to collapse and expand things), and so when it's expanded, all the 'new' textboxes are up in the viewstate stuff in the page source, and not down where JavaScript can see them.

    The questions: I'm wondering if I can have an onblur in the JavaScript that can trigger a postback where I can do my validation and such, and either (1) include the key pressed, or pick it out of the sender in the event, or (2) follow up the onblur with onkeyup and somehow figure out which textbox is next on the grid and throw focus there. Or is there another .NET-based approach that could work for this? In terms of tearing the whole thing down and starting from scratch, I couldn't sell that to the bosses; I'm past the point of no return as far as that goes.
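
    One piece that may help with the focus-loss half of the problem (an assumption of mine, not from the post): inside an UpdatePanel the server can restore focus after the async postback via the ScriptManager, so OnTextChanged can validate and then point focus at the next row's textbox. A hedged sketch; the grid and textbox IDs are invented, and this only addresses focus restoration, not detecting which key caused the blur:

        protected void EntryTextBox_TextChanged(object sender, EventArgs e)
        {
            var current = (TextBox)sender;
            // Validate(current);  // hypothetical validation + labeling

            // A TextBox inside a GridView row has the row as its naming container.
            GridViewRow row = (GridViewRow)current.NamingContainer;
            int next = row.RowIndex + 1;
            if (next < EntryGrid.Rows.Count)
            {
                var nextBox = (TextBox)EntryGrid.Rows[next].FindControl("EntryTextBox");
                // Works across async UpdatePanel postbacks, unlike Page.SetFocus alone.
                ScriptManager.GetCurrent(Page).SetFocus(nextBox);
            }
        }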

  • Visual Studio 2010 colourizers, IntelliSense and the rest. Where to start?

    - by Owen
    OK, before I begin, I realize that there is a lot of documentation on this subject, but I have thus far failed to get even basic colourization working for VS2010. My goal is simply to get to a point where I can open a document and everything is coloured red; from there I can implement the relevant parsing logic. Here's what I have tried/found:

    1. Downloaded all the relevant SDKs and such. Found the Ook sample (http://code.msdn.microsoft.com/ookLanguage) - didn't build, didn't work.

    2. Knowing almost nothing about MEF, read through "Implementing a Language Service By Using the Managed Package Framework" - http://msdn.microsoft.com/en-us/library/bb166533(v=VS.100).aspx. This was pretty much a copy and paste of all the basic stuff here, and also updating some references which were out of date with the sample, see: http://social.msdn.microsoft.com/Forums/en-US/vsx/thread/a310fe67-afd2-4592-b295-3fc86fec7996

    Now, I have got to a point where, when running the package, MEF appears to have hooked up correctly (I know this because with the debugger open I can see that the package's initialize and FDoIdle methods are being hit). When I open a file of the extension I have registered with the ProvideLanguageExtensionAttribute, everything dies as if in an endless loop, yet no debug symbols are hit (though they are loaded).

    Looking at the Ook sample and the MEF examples, they seem to be totally different approaches to the same problem. In the Ook sample there are notions of classifications and completion controllers which aren't mentioned in the MEF example. Also, they don't seem to create a Package or language service, so I have no idea how that should work. With the MEF example, my assumption is that I need to hook into "IScanner.ScanTokenAndProvideInfoAboutIt" to provide syntax highlighting? Which would be fine if I could ever hit this method.

    So my first question, I guess, is: which approach should I be taking here? Or do they both somehow tie together?

    My second question is: where can I find a basic, fully working project that implements bog-standard basic syntax highlighting and IntelliSense for VS2010?

    Thirdly, in the MEF example, when I created a Package there were a bunch of test projects created for me. It appears that the integration tests launch the VS2010 test rig somehow, but the test fails. It would be good to write my service with tests, but I have no idea what/how I can test for each interaction, so any references to testing language services would be helpful.

    Finally, please throw any resource/book links my way that I may find useful.

    Cheers, Chris.

    N.B. Sorry, I realize this is part question, part rant, but I have never been so confused.
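
    On the first question: for pure colouring, the new editor's MEF classifier route (the Ook approach) is enough on its own, with no Package or language service involved. A hedged sketch of "everything red" under that assumption; the content type name and classification type name are invented, and a matching ClassificationTypeDefinition plus a ClassificationFormatDefinition (where the red colour is set) would need to be exported alongside:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Text;
        using Microsoft.VisualStudio.Text.Classification;
        using Microsoft.VisualStudio.Utilities;

        [Export(typeof(IClassifierProvider))]
        [ContentType("mylang")]  // assumed content type, registered elsewhere
        internal sealed class AllRedClassifierProvider : IClassifierProvider
        {
            [Import]
            internal IClassificationTypeRegistryService Registry = null;

            public IClassifier GetClassifier(ITextBuffer buffer)
            {
                return buffer.Properties.GetOrCreateSingletonProperty(
                    () => new AllRedClassifier(Registry));
            }
        }

        internal sealed class AllRedClassifier : IClassifier
        {
            private readonly IClassificationType _redType;

            public AllRedClassifier(IClassificationTypeRegistryService registry)
            {
                _redType = registry.GetClassificationType("MyRed");  // assumed name
            }

            public event EventHandler<ClassificationChangedEventArgs> ClassificationChanged
            {
                add { } remove { }
            }

            public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
            {
                // Paint the entire requested span with one classification.
                return new List<ClassificationSpan> { new ClassificationSpan(span, _redType) };
            }
        }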

  • Multiple column subselect in MySQL 5 (5.1.42)

    - by rubber boots
    This one seems to be a simple problem, but I can't make it work in a single or nested select: retrieve the authors and (if any) advisers of a paper (article) into one row.

    To explain the problem, here are the two data tables (pseudo):

        papers (id, title, c_year)
        persons (id, firstname, lastname)

    plus a link table with one extra attribute (pseudo):

        paper_person_roles(
            paper_id
            person_id
            act_role ENUM ('AUTHOR', 'ADVISER')
        )

    This is basically a list of written papers (table: papers) and a list of staff and/or students (table: persons). An article may have (1,N) authors. An article may have (0,N) advisers. A person can be in the 'AUTHOR' or 'ADVISER' role (but not at the same time).

    The application eventually puts out table rows containing the following entries:

        TH: || Paper_ID | Author(s)             | Title                     | Adviser(s)      |
        TD: || 21334    | John Doe, Jeff Tucker | Why the moon looks yellow | Brown, Rayleigh |
        ...

    My first approach was to select/extract a full list of articles into the application, e.g.

        SELECT q.id, q.title FROM papers AS q ORDER BY q.c_year

    and save the results of the query into an array (in the application). After this step, loop over the array of the returned information and retrieve the authors and advisers (if any) via a prepared statement (? is the paper's id) from the link table, like:

        APPLICATION_LOOP(paper_ids in array)
            SELECT p.lastname, p.firstname, r.act_role
            FROM persons AS p, paper_person_roles AS r
            WHERE p.id=r.person_id AND r.paper_id = ?
            # The application does further processing from here (pseudo):
            foreach record from resulting records
                if record.act_role eq 'AUTHOR'  then join to author_column
                if record.act_role eq 'ADVISER' then join to adviser_column
            end
            print id, author_column, title, adviser_column
        APPLICATION_LOOP

    This works so far and gives the desired output. Would it make sense to put the computation back into the DB? I'm not very proficient in nontrivial SQL and can't find a solution with a single (combined or nested) select call. I tried something like:

        SELECT q.title
          (CONCAT_WS(' ',
            (SELECT p.firstname, p.lastname AS aunames
             FROM persons AS p, paper_person_roles AS r
             WHERE q.id=r.paper_id AND r.act_role='AUTHOR')
          )
        ) AS aulist
        FROM papers AS q, persons AS p, paper_person_roles AS r

    in several variations, but no luck. Maybe there is some chance? Thanks in advance, r.b.
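
    One single-query shape that is commonly used for this (a suggestion, not from the post) is GROUP_CONCAT over a CASE on the role, which collapses each paper's people into two comma-separated columns in one pass. A hedged sketch against the pseudo-schema above; LEFT JOINs keep papers that have no advisers:

        SELECT q.id,
               q.title,
               GROUP_CONCAT(CASE WHEN r.act_role = 'AUTHOR'
                                 THEN CONCAT(p.firstname, ' ', p.lastname) END
                            SEPARATOR ', ') AS authors,
               GROUP_CONCAT(CASE WHEN r.act_role = 'ADVISER'
                                 THEN CONCAT(p.firstname, ' ', p.lastname) END
                            SEPARATOR ', ') AS advisers
        FROM papers AS q
        LEFT JOIN paper_person_roles AS r ON r.paper_id  = q.id
        LEFT JOIN persons            AS p ON p.id        = r.person_id
        GROUP BY q.id, q.title
        ORDER BY q.c_year;

    GROUP_CONCAT skips the NULLs produced by the non-matching CASE branches, so each column holds only the people in that role.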

  • What are the rules governing how a bind variable can be used in Postgres and where is this defined?

    - by Craig Miles
    I can have a table and function defined as:

        CREATE TABLE mytable (
            mycol integer
        );

        INSERT INTO mytable VALUES (1);

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer)
        RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow
            FROM mytable
            WHERE mycol = l_myvar;

            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    In this case l_myvar acts as a bind variable for the value passed when I call:

        SELECT * FROM myfunction(1);

    and the call returns the row where mycol = 1. If I redefine the function as:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer)
        RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow
            FROM mytable
            WHERE mycol IN (l_myvar);

            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    then

        SELECT * FROM myfunction(1);

    still returns the row where mycol = 1. However, if I now change the function definition to allow me to pass an integer array and try to use this array in the IN clause, I get an error:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[])
        RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow
            FROM mytable
            WHERE mycol IN (array_to_string(l_myvar, ','));

            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    Analysis reveals that although:

        SELECT array_to_string(ARRAY[1, 2], ',');

    returns 1,2 as expected,

        SELECT * FROM myfunction(ARRAY[1, 2]);

    returns the error "operator does not exist: integer = text" at the line:

        WHERE mycol IN (array_to_string(l_myvar, ','));

    If I execute:

        SELECT * FROM mytable WHERE mycol IN (1,2);

    I get the expected result. Given that array_to_string(l_myvar, ',') evaluates to 1,2 as shown, why aren't these statements equivalent? From the error message it is something to do with data types, but doesn't the IN (variable) construct appear to behave differently from the = variable construct? What are the rules here?

    I know that I could build a statement to EXECUTE, treating everything as a string, to achieve what I want, so I am not looking for that as a solution. I do want to understand, though, what is going on in this example. Is there a modification to this approach to make it work, the particular example being to pass in an array of values to build a dynamic IN clause without resorting to EXECUTE? Thanks in advance, Craig
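
    On the "modification to make it work" part, the usual answer (my suggestion, not from the post): IN (expr) treats a single expression as a single value, so array_to_string produces one text value '1,2' and the planner tries integer = text, hence the error. For arrays, = ANY compares against the elements directly and no string-building is needed. A hedged sketch, switching to SETOF since an array can match several rows:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[])
        RETURNS SETOF mytable AS $$
        BEGIN
            RETURN QUERY
                SELECT *
                FROM mytable
                WHERE mycol = ANY (l_myvar);  -- element-wise comparison, no casts
        END;
        $$ LANGUAGE plpgsql;

        -- SELECT * FROM myfunction(ARRAY[1, 2]);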

  • Can I use the [] operator in C++ to create virtual arrays

    - by Shane MacLaughlin
    I have a large code base, originally C ported to C++ many years ago, that operates on a number of large arrays of spatial data. These arrays contain structs representing point and triangle entities that make up surface models. I need to refactor the code such that the specific way these entities are stored internally varies for specific scenarios. For example, if the points lie on a regular flat grid, I don't need to store the X and Y coordinates, as they can be calculated on the fly, as can the triangles. Similarly, I want to take advantage of out-of-core tools such as STXXL for storage. The simplest way of doing this is replacing array access with put- and get-type functions, e.g.

        point[i].x = XV;

    becomes

        Point p = GetPoint(i);
        p.x = XV;
        PutPoint(i, p);

    As you can imagine, this is a very tedious refactor on a large code base, prone to all sorts of errors en route. What I'd like to do is write a class that mimics the array by overloading the [] operator. As the arrays already live on the heap and move around with reallocs, the code already assumes that references into the array, such as point *p = point + i, may not be used. Is this class feasible to write? For example, writing the methods below in terms of the [] operator:

        void MyClass::PutPoint(int Index, Point p)
        {
            if (m_StorageStrategy == RegularGrid)
            {
                int xoffs, yoffs;
                ComputeGridFromIndex(Index, xoffs, yoffs);
                StoreGridPoint(xoffs, yoffs, p.z);
            }
            else
                m_PointArray[Index] = p;
        }

        Point MyClass::GetPoint(int Index)
        {
            if (m_StorageStrategy == RegularGrid)
            {
                int xoffs, yoffs;
                ComputeGridFromIndex(Index, xoffs, yoffs);
                return GetGridPoint(xoffs, yoffs);   // GetGridPoint returns Point
            }
            else
                return m_PointArray[Index];
        }

    My concern is that all the array classes I've seen tend to pass by reference, whereas I think I'll have to pass structs by value. I think it should work, but other than performance, can anyone see any major pitfalls with this approach? N.B. the reason I have to pass by value is to get point[a].z = point[b].z + point[c].z to work correctly where the underlying storage type varies.
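
    A hedged sketch of the proxy idiom this is circling (my illustration; a plain vector stands in for the real storage-strategy dispatch): operator[] returns a lightweight proxy whose conversion operator handles reads by value and whose operator= routes writes back through PutPoint.

        #include <cstddef>
        #include <vector>

        struct Point { double x, y, z; };

        class PointArray {
        public:
            class Proxy {
            public:
                Proxy(PointArray& owner, std::size_t index)
                    : owner_(owner), index_(index) {}

                // Read: materialise a Point by value from the active storage.
                operator Point() const { return owner_.GetPoint(index_); }

                // Write: route the assigned value back into the active storage.
                Proxy& operator=(const Point& p) {
                    owner_.PutPoint(index_, p);
                    return *this;
                }

            private:
                PointArray& owner_;
                std::size_t index_;
            };

            Proxy operator[](std::size_t i)       { return Proxy(*this, i); }
            Point operator[](std::size_t i) const { return GetPoint(i); }

        private:
            // Dispatch on the storage strategy here, as in the question's
            // GetPoint/PutPoint; a vector stands in for the real storage.
            Point GetPoint(std::size_t i) const           { return points_[i]; }
            void  PutPoint(std::size_t i, const Point& p) { points_[i] = p; }

            std::vector<Point> points_;
        };

    One caveat worth knowing up front: whole-element reads and writes (Point p = arr[b]; arr[a] = p;) work, but point[a].z = point[b].z + point[c].z does not compile as-is, because the proxy has no z member. Reads can be forced through the conversion (Point(arr[b]).z), while member writes need either per-field support on the proxy or a read-modify-write of the whole Point.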

  • Visual Studio 2010 and Test Driven Development

    - by devoured elysium
    I'm making my first steps in Test Driven Development with Visual Studio. I have some questions regarding how to implement generic classes with VS 2010.

    First, let's say I want to implement my own version of an ArrayList. I start by creating the following test (I'm using MSTest in this case):

        [TestMethod]
        public void Add_10_Items_Remove_10_Items_Check_Size_Is_Zero()
        {
            var myArrayList = new MyArrayList<int>();
            for (int i = 0; i < 10; ++i)
            {
                myArrayList.Add(i);
            }
            for (int i = 0; i < 10; ++i)
            {
                myArrayList.RemoveAt(0);
            }
            int expected = 0;
            int actual = myArrayList.Size;
            Assert.AreEqual(expected, actual);
        }

    I'm using VS 2010's ability to hit Ctrl+. and have it implement classes/methods on the go. I have been getting some trouble when implementing generic classes. For example, when I define an .Add(10) method, VS doesn't know whether I intend a generic method (as the class is generic) or an Add(int number) method. Is there any way to differentiate between these?

    The same thing can happen with return types. Let's assume I'm implementing a MyStack stack and I want to test whether, after I push an element and pop it, the stack is still empty. We all know pop should return something, but usually the code of this test shouldn't care about it. Visual Studio would then think that Pop is a void method, which is not what one would want. How do I deal with this? For each method, should I start by writing tests that are "very specific", such that it is obvious the method should return something, so I don't get this kind of ambiguity? Even if I'm not using the result, should I have something like int popValue = myStack.Pop()?

    How should I write tests for generic classes? Should I test with only one generic type? I have been using ints, as they are easy to use, but should I also test with different kinds of objects? How do you usually approach this?

    I see there is a popular tool called TestDriven for .NET. With the VS 2010 release, is it still useful, or are a lot of its features now part of VS 2010, rendering it kind of useless?

    Thanks
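
    To illustrate the Pop ambiguity concretely (a sketch of mine; MyStack's members are hypothetical): binding the return value in the test is enough for the generated stub to come out non-void, and it costs one extra assertion that usually improves the test anyway.

        [TestMethod]
        public void Push_Then_Pop_Leaves_Stack_Empty()
        {
            var stack = new MyStack<string>();
            stack.Push("hello");

            // Capturing the return value makes "Generate Method Stub"
            // infer a non-void signature for Pop.
            string popped = stack.Pop();

            Assert.AreEqual("hello", popped);
            Assert.AreEqual(0, stack.Size);
        }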

  • How to query across many-to-many association in NHibernate?

    - by Splash
    I have two entities, Post and Tag. The Post entity has a collection of Tags which represents a many-to-many join between the two (that is, each post can have any number of tags and each tag can be associated with any number of posts).

    I am trying to retrieve all Posts which have a given tag. However, I seem to be unable to get this query right. I essentially want something which means the same as the following pseudo-HQL:

        from Posts p
        where p.Tags contains (from Tags t where t.Name = :tagName)
        order by p.DateTime

    The only thing I've found which even approaches this is a post by Ayende. However, his approach requires the entity on the other side (in my case, Tag) to have a collection representing the other end of the many-to-many. I don't have this and don't really wish to have it. I find it hard to believe this can't be done. What am I missing?

    My entities and mappings look like this (simplified):

        public class Post
        {
            public virtual int Id { get; set; }
            public virtual string Title { get; set; }

            private IList<Tag> tags = new List<Tag>();
            public virtual IEnumerable<Tag> Tags { get { return tags; } }

            public virtual void AddTag(Tag tag)
            {
                this.tags.Add(tag);
            }
        }

        public class PostMap : ClassMap<Post>
        {
            public PostMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Title);
                HasManyToMany(x => x.Tags);
            }
        }

        // ----

        public class Tag
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public class TagMap : ClassMap<Tag>
        {
            public TagMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Name).Unique();
            }
        }
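
    For reference, HQL can join through the collection from the Post side directly, which needs no inverse collection on Tag. A hedged sketch (the DateTime ordering is carried over from the pseudo-HQL, assuming the full Post exposes such a property):

        var posts = session
            .CreateQuery(@"select p from Post p
                           join p.Tags t
                           where t.Name = :tagName
                           order by p.DateTime")
            .SetParameter("tagName", tagName)
            .List<Post>();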

  • DataGridView live display of datatable using virtual mode

    - by Chris
    I have a DataGridView that will display records (log entries) from a database. The number of records that can exist at a time is very large. I would like to use the virtual mode feature of the DataGridView to display a page of data, and to minimize the amount of data that has to be transferred across the network at a given time.

    Polling for data is out of the question. There will be several clients running at a time, all of which are on the same network and viewing the records. If they all poll for data, the network will run very slowly. The data is read-only to the user; they won't be able to edit any of it, just view it.

    I need to know when updates occur in the database, and I need to update the screen with those updates accordingly, using virtual mode. If a page of data a user is viewing contains data that has changed, he/she will see those updates on that page. If updates were made to data in the database but not in the data the user is viewing, then not much really changes on the user's screen (maybe just the scroll bar, if records were added or removed).

    My current approach uses SQL Server change tracking with the Sync Framework. Each client has a local SQL Server CE instance and database file that is kept in sync with the main database server. I use the information from the synchronization event to see whether any changes were made to the main database and synced to the client. I need to use the DataGridView's virtual mode here because I can't have thousands of records loaded into the DataGridView at once, otherwise memory usage goes through the roof.

    The main challenge right now is knowing how to use virtual mode to provide a seamless experience to the user by allowing them to scroll up and down through the records, and also have records update on the fly without interfering with the user inappropriately. Has anybody dealt with this issue before, and if so, where can I see how they did it? I've gone through some of the MSDN documentation and examples on virtual mode. So far, I haven't found documentation and/or examples on their site that explain how to do what I am trying to accomplish.
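
    Not from the post, but the core wiring for this setup is fairly small: with VirtualMode on, the grid asks for cells on demand through CellValueNeeded, and a page cache keyed by row block keeps memory flat. A hedged sketch; LogEntry, FetchPage, GetColumnValue and the page size are assumptions standing in for the local SQL Server CE access:

        private const int PageSize = 500;
        private readonly Dictionary<int, LogEntry[]> pageCache =
            new Dictionary<int, LogEntry[]>();

        private void InitGrid(int totalRecords)
        {
            grid.VirtualMode = true;
            grid.RowCount = totalRecords;  // scrollbar reflects the full record set
            grid.CellValueNeeded += Grid_CellValueNeeded;
        }

        private void Grid_CellValueNeeded(object sender, DataGridViewCellValueEventArgs e)
        {
            int page = e.RowIndex / PageSize;
            LogEntry[] rows;
            if (!pageCache.TryGetValue(page, out rows))
            {
                // Hypothetical query against the local SQL Server CE store.
                rows = FetchPage(page * PageSize, PageSize);
                pageCache[page] = rows;
            }
            e.Value = GetColumnValue(rows[e.RowIndex % PageSize], e.ColumnIndex);
        }

        // When the sync event reports changed rows, drop the affected pages and
        // repaint, so updates appear on screen without a full reload:
        private void OnRecordsChanged(IEnumerable<int> changedRowIndexes)
        {
            foreach (int row in changedRowIndexes)
                pageCache.Remove(row / PageSize);
            grid.Invalidate();
        }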
