Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • C#: How to run NUnit from my code

    - by Flavio
    Hello, I'd like to use NUnit to run unit tests in my plug-in, but they need to be run in the context of my application. To solve this, I was trying to develop a plug-in that runs NUnit, which in turn executes my tests in the application's context. I didn't find specific documentation on this subject, so I dug up pieces of information here and there and came up with the following code (which is similar to one I found here on Stack Overflow):

        SimpleTestRunner runner = new SimpleTestRunner();
        TestPackage package = new TestPackage( "Test" );
        string loc = Assembly.GetExecutingAssembly().Location;
        package.Assemblies.Add( loc );
        if( runner.Load(package) )
        {
            TestResult result = runner.Run( new NullListener() );
        }

    The result variable says "has no TestFixture", although I know for sure it is there. In fact my test file contains two tests. Using another approach I found, which is summarized by the following code:

        TestSuiteBuilder builder = new TestSuiteBuilder();
        TestSuite testSuite = builder.Build( package );
        // Run tests
        TestResult result = testSuite.Run( new NullListener(), NUnit.Core.TestFilter.Empty );

    I saw NUnit data structures with only one test, and I got the same error. For the sake of completeness, I am using the latest version of NUnit, which is 2.5.5.10112. Does anyone know what I'm missing? Sample code would be appreciated. Thanks.
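
    A frequent cause of "has no TestFixture" when driving NUnit 2.5.x programmatically is that NUnit's core services were never initialized before the package was loaded. A minimal sketch, assuming the NUnit 2.5.x NUnit.Core assembly; the InitializeService call is the only change from the code above:

        using System.Reflection;
        using NUnit.Core;

        // Assumption: 2.5.x needs its core services set up before fixture discovery works.
        CoreExtensions.Host.InitializeService();

        SimpleTestRunner runner = new SimpleTestRunner();
        string loc = Assembly.GetExecutingAssembly().Location;
        TestPackage package = new TestPackage( loc );   // name the package after the test assembly
        package.Assemblies.Add( loc );
        if ( runner.Load( package ) )
        {
            TestResult result = runner.Run( new NullListener() );
        }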

    Read the article

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want to have a custom copy command for copying entire sets of data that only makes sense for my program, instead of just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text. One solution I was thinking about is to only catch the hotkey when it "makes sense", but when you use some advanced custom selection it might be hard to differentiate whether your data is focused, whether text is selected, or both. Right now I am only using single keys as the hotkey, so just 'c' for the example above, and this seems to be what most other sites are doing too. The problem is that if you have text input this doesn't work so well. Is this the best solution? To clarify, I'm talking about advanced webapps that behave more like normal programs and not just some website presenting information (even though I think these guidelines would be valid for both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but when Ctrl+Tab, Alt+D or Ctrl+E doesn't work I would be really pissed, cough flash cough.
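
    One approach is to intercept the combination only while an application-level selection is active and no ordinary text is selected; a plain JavaScript sketch (hasCustomSelection and copyCustomSelection stand in for app-specific functions):

        document.addEventListener('keydown', function (e) {
            var copyCombo    = (e.ctrlKey || e.metaKey) && e.keyCode === 67; // Ctrl/Cmd + C
            var textSelected = window.getSelection && String(window.getSelection()) !== '';
            if (copyCombo && !textSelected && hasCustomSelection()) {
                e.preventDefault();        // otherwise leave the browser's default copy alone
                copyCustomSelection();     // app-specific copy of the selected data set
            }
        }, false);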

    Read the article

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but unnoticed, I've been digging through the logs trying to find in which commit this handful of lines was removed. I've tried a number of things, including "git log -S'add-visit-resource-pcf'", git blame, and even git bisect with a script that simply checks for the existence of the line, but I have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.

        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.
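
    For reference, a few pickaxe variations that widen the search beyond the current branch's history (the string and path are taken from the question above):

        # search every ref, not just the branch you are on
        git log --all -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

        # include the patches, so the commit that removed the line shows the deletion
        git log --all -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

        # follow the file across renames, in case it was moved at some point
        git log --follow -p -S'add-visit-resource-pcf' spike/hst/scheduler/defpackage.lisp

    If the removal happened on a branch that was later merged back, adding --all (and possibly -m to show merge diffs) is usually the quickest way to surface it.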

    Read the article

  • Obtaining Current Fiscal Year from Hierarchy with MDX

    - by Robert Iver
    I'm building a report in Reporting Services 2005 based on an SSAS 2005 cube. The basic idea of the report is that they want to see sales for this fiscal year to date vs. last year's sales year to date. It sounds simple, but since this is my first "real" report based on SSAS, I'm having a hell of a time. First, how would one calculate the current fiscal year, quarter, or month? I have a Fiscal Date hierarchy with all that information in it, but I can't figure out how to say: "Based on today's date, find the current fiscal year, quarter, and month." My second, but slightly smaller, problem is getting last year's sales vs. this year's sales. I have seen MANY examples of how to do this, but they all assume that you select the date manually. Since this is a report and will run pretty much on its own, I need a way to insert the "current" fiscal year, quarter, and month into the PERIODSTODATE or PARALLELPERIOD functions to get what I want. So, I'm begging for your help on this one. I have to have this report done by tomorrow morning and I'm beginning to freak out a bit. Thanks in advance.
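
    One commonly used pattern is to build the current member's unique name from today's date and resolve it with StrToMember. This is only a hedged sketch - the dimension, hierarchy, level, key format and measure names below are assumptions and have to be adapted to the actual Fiscal Date hierarchy:

        WITH MEMBER [Measures].[Sales YTD] AS
            SUM(
                PERIODSTODATE(
                    [Fiscal Date].[Fiscal].[Fiscal Year],
                    STRTOMEMBER("[Fiscal Date].[Fiscal].[Date].&["
                                + FORMAT(NOW(), "yyyyMMdd") + "]")),
                [Measures].[Sales Amount])
        SELECT [Measures].[Sales YTD] ON COLUMNS
        FROM [Sales]

    Last year's year-to-date figure can then be derived by wrapping the same StrToMember expression in PARALLELPERIOD([Fiscal Date].[Fiscal].[Fiscal Year], 1, ...) before handing it to PERIODSTODATE.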

    Read the article

  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    Method signature in Java:

        public List<String> getFilesIn(List<File> directories)

    A similar one in Ruby:

        def get_files_in(directories)

    In the case of Java, the type system gives me information about what the method expects and delivers. In Ruby's case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called in the method defined here. This seems highly problematic: Even with 100% accurate, up-to-date documentation, the Ruby code has to essentially expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare. The Ruby code gives me no clue what's being returned; I would have to essentially experiment, or read the code to find out what methods the returned object would respond to. Not looking to debate static typing vs. duck typing, but looking to understand how you maintain a production system where you have almost no ability to design by contract.

    Update: No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?
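
    In practice the contract tends to be enforced and documented at the method boundary rather than by a type system; an illustrative sketch (not taken from the question):

        # Fail fast if the caller passes something that doesn't quack like
        # an enumerable of directory paths, and document the return type.
        # Returns a flat Array of path Strings.
        def get_files_in(directories)
          unless directories.respond_to?(:map)
            raise ArgumentError, "expected an Enumerable of directory paths"
          end
          directories.map { |dir| Dir.glob(File.join(dir.to_s, "*")) }.flatten
        end

        files = get_files_in(["/tmp", "/var/log"])   # => Array of Strings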

    Read the article

  • Mac CoreLocation Services does not ask for permissions

    - by Ryan Nichols
    I'm writing a Mac app that needs to use CoreLocation services. The code and location work fine, as long as I manually authorize the app inside the Security preference pane. However, the framework is not automatically popping up a permission dialog. The documentation states:

        Important: The user has the option of denying an application's access to the location service data. During its initial uses by an application, the Core Location framework prompts the user to confirm that using the location service is acceptable. If the user denies the request, the CLLocationManager object reports an appropriate error to its delegate during future requests.

    I do get an error to my delegate, and the value of +locationServicesEnabled is correct on CLLocationManager. The only part missing is the prompt to the user about permissions. This occurs on my development MBP and a friend's MBP. Neither of us can figure out what's wrong. Has anyone run into this? Relevant code:

        _locationManager = [CLLocationManager new];
        [_locationManager setDelegate:self];
        [_locationManager setDesiredAccuracy:kCLLocationAccuracyKilometer];
        ...
        [_locationManager startUpdatingLocation];

    UPDATE: Answer. It seems there is a problem with sandboxing in which the CoreLocation framework is not allowed to talk to com.apple.CoreLocation.agent. I suspect this agent is responsible for prompting the user for permissions. If you add the Location Services entitlement (com.apple.security.personal-information.location) it only gives your app the ability to use the CL framework. However, you also need access to the CoreLocation agent to ask the user for permissions. You can give your app access by adding the entitlement 'com.apple.security.temporary-exception.mach-lookup.global-name' with a value of 'com.apple.CoreLocation.agent'. Users will be prompted for access automatically like you would expect. I've filed a bug with Apple on this already.

    Read the article

  • NHibernate Child items query using Parent Id

    - by thorkia
    So I have a setup similar to this question: Parent Child Setup. Everything works great when saving the parent and the children. However, I seem to have a problem when selecting the children: I can't seem to get all the children with a specific parent. This fails with:

        NHibernate.QueryException: could not resolve property: ParentEntity_id of: Test.Data.ChildEntity

    Here is my code:

        public IEnumerable<ChildEntity> GetByParent(ParentEntity parent)
        {
            using (ISession session = OrmHelper.OpenSession())
            {
                return session.CreateCriteria<ChildEntity>()
                    .Add(Restrictions.Eq("ParentEntity_id ", parent.Id))
                    .List<ChildEntity>();
            }
        }

    Any help in building a proper function to get all the items would be appreciated. Oh, I am using Fluent NHibernate to construct the mappings - version 1 RTM and NHibernate 2.1.2 GA. If you need more information, let me know. As per your request, my Fluent mappings:

        public ParentEntityMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Code).UniqueKey("ukCode");
            HasMany(x => x.ChildEntity).LazyLoad()
                .Inverse().Cascade.SaveUpdate();
        }

        public ChildEntityMap()
        {
            Id(x => x.Id);
            Map(x => x.Amount);
            Map(x => x.LogTime);
            References(x => x.ParentEntity);
        }

    That maps to the following two tables:

        CREATE TABLE "ParentEntity" (
            Id integer,
            Name TEXT,
            Code TEXT,
            primary key (Id),
            unique (Code)
        )

        CREATE TABLE "ChildEntity" (
            Id integer,
            Amount NUMERIC,
            LogTime DATETIME,
            ParentEntity_id INTEGER,
            primary key (Id)
        )

    The data is stored in SQLite.
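
    A common fix for this particular exception (offered as a sketch, assuming the mappings shown above) is to restrict on the mapped association's identifier rather than on the foreign-key column name, since the criteria API works against entity properties, not table columns:

        public IEnumerable<ChildEntity> GetByParent(ParentEntity parent)
        {
            using (ISession session = OrmHelper.OpenSession())
            {
                return session.CreateCriteria<ChildEntity>()
                    // "ParentEntity" is the mapped property; ".Id" reaches its identifier
                    // without forcing a join to the parent table.
                    .Add(Restrictions.Eq("ParentEntity.Id", parent.Id))
                    .List<ChildEntity>();
            }
        }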

    Read the article

  • How would I structure the loop to go through inputs?

    - by dmanexe
    I am attempting to make a loop that will go through an array structured like this:

        $input[n][checked]
        $input[n][input]

    The second input acts as a price multiplier, but doesn't have to exist (the field can be blank). I don't think a plain foreach loop is right, because I don't think it handles the inputs from the form in the correct dimensional array order to keep the information together. I have inputs on a form that look like this:

        <input type="checkbox" name="measure[<?php echo $item->id; ?>][checked]" value="<?php echo $item->id; ?>">
        <input class="item_mult" type="text" name="measure[<?php echo $item->id; ?>][input]" />

    I am attempting to make the loop go through and act as a multiplier on the input relative to its sibling field (i.e. input[1][input] would be an integer that I want to retrieve later, grouped with input[1][checked]).

        <? $field = $this->input->post('measure',true);
        $totals = array();
        foreach($field as $value):
            if ($value['input'] == TRUE) {
                $query = $this->db->get_where('items', array('id' => $value['input']))->row();
                $totals[] = $query->price; ?>
                <p><?=$query->name?> - <?=money_format('%(#10n', $query->price)?></p>
        <?php } else { } endforeach; ?>

    And finally, the last code to array_sum and print the grand total:

        <? $grand_total = array_sum($totals); ?>
        <p><?=money_format('%(#10n', $grand_total)?></p>

    Eventually, I will need to store these records in a database, so I am sending complete item IDs through, etc.
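
    If the intent is for the checkbox to decide whether a row counts and for the text field to act as the multiplier, a sketch along these lines may be closer to it (illustrative only; it keeps the question's CodeIgniter-style calls and treats a blank multiplier as 1):

        <?php
        $field  = $this->input->post('measure', TRUE);
        $totals = array();

        foreach ($field as $id => $value) {
            // Only rows whose checkbox was ticked arrive with a 'checked' key.
            if (!empty($value['checked'])) {
                $query = $this->db->get_where('items', array('id' => $value['checked']))->row();

                $multiplier = ($value['input'] !== '') ? (float) $value['input'] : 1;
                $totals[]   = $query->price * $multiplier;
            }
        }

        $grand_total = array_sum($totals);
        ?>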

    Read the article

  • VEMap and a GeoRSS feed (hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain, as a number of different applications have access to it. A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object). Now, VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is in regard to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error ("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (i.e. "feed.xml") there is no error. The order of operations is currently: remote feed - local handler - VEMap import. If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
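
    The same-domain proxy can be a very small generic handler; a sketch (the remote feed URL is a placeholder, and no caching or error handling is shown):

        <%@ WebHandler Language="C#" Class="GeoRssProxy" %>

        using System.Net;
        using System.Web;

        public class GeoRssProxy : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Fetch the remote GeoRSS feed server-side so the map only ever
                // sees a same-domain URL.
                using (WebClient client = new WebClient())
                {
                    string feed = client.DownloadString("http://feeds.example.com/georss"); // placeholder URL
                    context.Response.ContentType = "text/xml";
                    context.Response.Write(feed);
                }
            }

            public bool IsReusable { get { return false; } }
        }

    If the error persists, requesting /somelocalhandler.ashx directly in a browser and confirming it returns the same XML as the remote feed is a useful first check.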

    Read the article

  • Facebook Connect for simple authentication?

    - by Starnzy
    Hi, I have an ASP.NET website into which I want to introduce 'Facebook Connect' functionality, purely for account login/creation purposes. I want a user to be able to click the 'Login using Facebook' type button, and to then log that user into my website based on a user ID lookup from the Facebook response. I have a couple of questions surrounding this: Presumably I can do all of this using the Facebook API - without the need for an actual pretty public-facing 'application' on Facebook? I simply want to utilise the Facebook API for authenticating an account; I'm not interested in creating some app for doing something 'within' Facebook itself. I have located some code snippets online and tried using the Facebook Developer Toolkit, calling the getInfo method, and whilst it does come back to my website with a uid, none of the other user information is present within the response, like Email, Name etc. The uid is the only populated field in the response. Here is the code I use:

        if (ConnectAuthentication.isConnected())
        {
            API api = new API();
            api.ApplicationKey = ConnectAuthentication.ApiKey;
            api.SessionKey = ConnectAuthentication.SessionKey;
            api.Secret = ConnectAuthentication.SecretKey;
            api.uid = ConnectAuthentication.UserID;

            //Display user data captured from the Facebook API.
            facebook.Schema.user facebookUser = null;
            try
            {
                facebookUser = api.users.getInfo();
                User user = new User();
                user.FacebookUser = facebookUser;
                user.IsFacebookUser = true;
                return user;
            }
            catch
            {
                return null;
            }
        }
        else
        {
            return null;
        }

    Can anyone please help with either/both of these queries? Thanks in advance...

    Read the article

  • How to port an Ajax CMS based on metadata to ASP.NET MVC?

    - by Maushu
    I'm maintaining a CMS where I have this feeling it was made in the age of dinosaurs (ASP.NET 1.0?) and decided to upgrade it with ASP.NET MVC and jQuery. But I have some problems regarding the design/specifications of the CMS which I cannot change.

    The CMS: The CMS uses JavaScript. A lot. As in "I don't load pages, I request new pages using Ajax and render the information using JavaScript" a lot. Not to mention the animations, the weird horizontal presentation of structures... Anyway, besides the first page (which is the login page), every other "page" is just data requested from a web service that comes with the website. Would MVC have any problems with this design?

    The database: The database is SQL Server 2k8 and, like the CMS, this part is also... interesting. Basically, the user can create data structures using metadata (saved in the Structure table). These structures are saved in tables that are created (and regenerated when changed) at runtime using said metadata. I don't know how I would implement this part in MVC.

    The question is, can and should I convert this project to MVC? Any tips regarding the metadata and overuse of Ajax?

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored is, however, much smaller than the original data set, and ranges from 5,000-50,000 records. This process almost always updates existing data and perhaps adds a few more records.
    4. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched there is a data set which can be used to perform calculations without touching pre-existing data in the database.

    I should point out that this data can be lost; it's not irreplaceable (key information can be read from the database if needed), but it would speed up the process next time. Application components can (and will be) run off different computers (in the same network), so storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure I should do so, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? If it were to consume 1-2 GB of cache only to keep data in between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts in the network... Any other suggestion? Something I should consider?

    Read the article

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x reload-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I have just changed one toggle I can do it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question):

        (setq skeleton-pair t)
        (setq skeleton-pair-on-word t)
        (global-set-key (kbd "[") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "(") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "{") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "<") 'skeleton-pair-insert-maybe)

    Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented all of it out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting out, if such a thing exists... Thanks :)
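
    For this particular toggle, evaluating the opposite settings (with M-x eval-region, or C-x C-e after each form) undoes it without restarting Emacs; a sketch:

        ;; Turn the pairing behaviour back off...
        (setq skeleton-pair nil)
        (setq skeleton-pair-on-word nil)

        ;; ...and give the keys their ordinary self-inserting behaviour again.
        (global-set-key (kbd "[") 'self-insert-command)
        (global-set-key (kbd "(") 'self-insert-command)
        (global-set-key (kbd "{") 'self-insert-command)
        (global-set-key (kbd "<") 'self-insert-command)

        ;; A boolean variable can also be flipped in place:
        (setq skeleton-pair (not skeleton-pair))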

    Read the article

  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques are not helping too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are currently using two HP DLG 380 servers with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious:

    - how many and how powerful servers are needed (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disc storage solutions, etc., that is needed

    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • How to set UCS2 in numpy?

    - by mindcorrosive
    I'm trying to build numpy 1.2.1 as a module for a third-party Python interpreter (custom-built, py2.4, linux x86_64) so that I can make calls to numpy from within it. Let's call this one interpreter A. The thing is, the system-wide Python interpreter (also py2.4, let's call it B) from the vendor is built with --enable-unicode=ucs4, while the custom one is built with UCS2. Needless to say, when I try to build the module with B, I get an error when I try to import numpy in A -- it complains about an undefined symbol _PyUnicodeUCS4_IsWhiteSpace. I've searched around and apparently there's no way around this but to compile a custom Python interpreter -- which I did (let's call it interpreter C), properly specifying the Unicode string length (verifiable through sys.maxunicode). I managed to build numpy with C as well, surprisingly enough, but the problem still persists when I try to import it in interpreter C. Previously, when I built numpy using B, there were no problems when importing it in B, but A would complain. Perhaps there's an option when building numpy to specify the length of Unicode strings to be used, as when configuring Python builds? Or am I doing something else wrong? A few notes:

    - Upgrading to newer versions of Python and/or numpy is not an option - interpreter A will stay on this version of the grammar for the foreseeable future.
    - It is not possible to start interpreter A in standalone mode to build numpy with it, as it needs some other libraries preloaded.

    I know that this whole thing is a mess, but I'd appreciate any help I can get to make this work. If you need more information, please let me know, I'd be happy to oblige. Thanks to everybody for their time in advance.
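
    A quick diagnostic that can be run in any of the three interpreters (a sketch, not from the original question):

        import sys

        # 65535 means a narrow (UCS2) build, 1114111 means a wide (UCS4) build.
        # The interpreter that compiled numpy and the one importing it must agree.
        print sys.maxunicode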

    Read the article

  • CSS styles are not applied to elements added to JavaFX component tree

    - by pazabo
    I have applied CSS styles to JavaFX components and it looks like everything is working fine except in one situation: when I add JavaFX components to the component tree on the fly, their CSS styles are not applied. For example, the following code:

        package test;

        import javafx.stage.Stage;
        import javafx.scene.Scene;
        import javafx.scene.shape.Rectangle;
        import javafx.scene.input.MouseEvent;
        import javafx.util.Math;
        import javafx.scene.paint.Color;

        function getRect(): Rectangle {
            return Rectangle {
                x: 230 * Math.random()
                y: 60 * Math.random()
                width: 20, height: 20
                styleClass: "abc"
            }
        }

        def stage: Stage = Stage {
            scene: Scene {
                width: 250, height: 80
                stylesheets: "{__DIR__}main.css"
                content: [
                    Rectangle {
                        x: 0, y: 0, width: 250, height: 80
                        fill: Color.WHITE
                        onMouseClicked: function (evt: MouseEvent): Void {
                            insert getRect() into stage.scene.content;
                        }
                    }
                    getRect()
                ]
            }
        }

    with the following stylesheet in the main.css file (both in the test package):

        .abc {
            fill: red;
        }

    displays a red square on a white background, but after clicking the main rectangle, black (not red) squares are added to the scene. I noticed that:

    - Components added dynamically look just as if the style information was not applied. If you set their style in JavaFX code, then everything works fine.
    - After changing the stylesheets property (so that it points to another valid stylesheet), the objects already added render properly.

    Does anyone know the solution to this problem? I could of course put all the properties into JavaFX code, or provide another stylesheet (for every existing stylesheet) that would contain the same data and change the stylesheet right after adding any component, but I would like to find some elegant solution. Thanks in advance.

    Read the article

  • XML problem in the basic menu example

    - by arakn0
    Hi there, I am trying to create an app with some menus, and I am following the basic example available on the official Android site: http://developer.android.com/guide/topics/ui/menus.html My problems appear when I define the menu in the XML. After creating the folder res/menu and creating the menu_option.xml file from Eclipse, the project (in general) gives an error that can be read from the Problems tab:

        Unparsed aapt error(s)! Check the console for output
        Android Packaging Problem

    So, changing to the Console tab to get more information about the problem, this can be read:

        [2010-06-02 11:35:54 - TestAudio] Error in an XML file: aborting build.
        [2010-06-02 11:35:54 - TestAudio] W/ResourceType(11566): Bad XML block: header size 63327 or total size -144759824 is larger than data size 0
        [2010-06-02 11:35:54 - TestAudio] /home/User/workspace/TestAudio/res/menu/options_menu.xml:1: error: Error parsing XML: no element found

    The strange thing is that Eclipse recognizes the menu items that I've defined in the XML, I can reference them in the code with no problems, and my main activity builds (and the rest of the files too). Could it be that when Eclipse creates a file, for some reason, the Android SDK has problems reading it, or something similar? The XML code is exactly the same as the one in the example, so I don't really know what is happening. The code in options_menu.xml is this:

        <menu xmlns:android="http://schemas.android.com/apk/res/android">
            <item android:id="@+id/new_game"
                  android:title="New Game" />
            <item android:id="@+id/quit"
                  android:title="Quit" />
        </menu>

    Thanks in advance for your help!
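
    For completeness, the matching inflation code from the same Android guide is the standard onCreateOptionsMenu override (R.menu.options_menu assumes the file name from the error message above):

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            // Inflate res/menu/options_menu.xml into the activity's options menu.
            MenuInflater inflater = getMenuInflater();
            inflater.inflate(R.menu.options_menu, menu);
            return true;
        }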

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAraeaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, PrentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table to achieve granularity?

    Read the article

  • Is there any way to carry a value in PHP forward to a second page?

    - by Henry Aspden
    I have created a PHP site, and previously it was listing only products with defined values. I have now changed it to include an array of products, for example all products WHERE id = "spotlights", and this works great: it means I can add new products just to the database, but I still have to add the second page manually, e.g. going from the product div on the main page through to www.example.com/spotlight_1.php. Is there any way in PHP to carry the data from my index.php (e.g. the ID) through to the next page, so that I can have a template product.php page and use a database pull to echo the product information required? So on index.php I click on the product with ID="1", and on the product.php page it loads the relevant data for product 1. I can write the PHP SQL/MySQL calls myself; it's just the way to carry across a value from the previous page which I don't understand. P.S. All the IDs and things are stored in the database already as 1- to 3-digit values, e.g. 3 or 93 or 254. Any advice as always is greatly appreciated. Regards, Henry
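
    The usual pattern is to pass the ID in the link's query string and read it back on the template page; a sketch (the table and column names are made up and should be adapted to the real schema):

        <!-- index.php: link each product div to the template page -->
        <a href="product.php?id=<?php echo (int) $row['id']; ?>">
            <?php echo htmlspecialchars($row['name']); ?>
        </a>

        <?php
        // product.php: read the ID back and load that product.
        $id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

        $result  = mysql_query("SELECT * FROM products WHERE id = " . $id);
        $product = mysql_fetch_assoc($result);

        echo htmlspecialchars($product['name']);
        ?>

    Sessions ($_SESSION) or a POSTed form work too, but a query-string ID keeps the product pages bookmarkable.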

    Read the article

  • How to parse out base file name using Script-Fu

    - by ongle
    Using GIMP 2.6.6 for Mac OS X (under X11) as downloaded from gimp.org. I'm trying to automate a boring manual process with Script-Fu. I needed to parse the image file name to save off various layers as new files using a suffix on the original file name. My original attempts went like this, but failed because (string-search ...) doesn't seem to be available under 2.6 (a change to the scripting engine?):

        (set! basefilename (substring filename 0 (string-search "." filename)))

    Then I tried to parse out the base file name using a regex, but (re-match-nth ...) is not recognized either:

        (if (re-match "^(.*)[.]([^.]+)$" filename buffer)
            (set! basefilename (re-match-nth orig-name buffer 1))
        )

    And while pulling the value out of the vector ran without error, the resulting value is not considered a string when it is passed into (string-append ...):

        (if (re-match "^(.*)[.]([^.]+)$" filename buffer)
            (set! basefilename (vector-ref buffer 1))
        )

    So I guess my question is, how would I parse out the base file name?
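
    One way that relies only on plain string primitives, so it should work in the TinyScheme-based Script-Fu of 2.6 (a sketch, not from the original post):

        (define (base-file-name filename)
          (define (scan i)
            (cond ((< i 0) filename)                      ; no dot found: keep the whole name
                  ((char=? (string-ref filename i) #\.)
                   (substring filename 0 i))              ; cut at the last dot
                  (else (scan (- i 1)))))
          (scan (- (string-length filename) 1)))

        (base-file-name "image.layer.xcf")   ; => "image.layer"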

    Read the article

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in my BaseAddresses that I hook my endpoints to. When trying to connect to an endpoint using the WCF test client, I have no problem connecting to the endpoint and getting the metadata using the machine's name. The problem that I run into is that the client that is generated from metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming that this is because localhost is the endpoint URL published by metadata. As a result, any calls to the methods on the service will fail since localhost on the calling machine isn't running the service. What I would like to figure out is if it is possible for the service metadata to publish the proper URL to a client depending on the client who is calling it. For example, if I was requesting the service metadata from a machine on the same network as the server the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I was requesting it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously if the client was on the same machine, THEN the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to publish the information in a non-contextual way? Is there another way I should be configuring my BaseAddresses in order to get the functionality I want? Thanks, Mike

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several copies of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (the Initial Catalog in connection-string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture:

    (1) Can the workflow database be shared by all the applications?
    (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?
    (3) How should the workflow engine communicate with the applications? Should the application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine will call? How will the workflow engine then identify applications? Both cases probably require some application identifier as a parameter in API calls?
    (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!

    Read the article

  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema below, but the concept is similar) in both fixed-width and delimited formats. The FOR XML PATH query below gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while.

        select orderid
              ,REPLACE((
                  SELECT ' ' + CAST(ProductId as varchar)
                  FROM _details d
                  WHERE d.OrderId = o.OrderId
                  ORDER BY d.OrderId, d.DetailId
                  FOR XML PATH('')
              ),'&#x20;','') as Products
        from _orders o

    I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with the column names either, since the output of the child rows will either be a fixed-width string or a delimited string. For example, given the following tables:

        OrderId     CustomerId
        ----------- -----------
        1           1
        2           2
        3           3

        DetailId    OrderId     ProductId
        ----------- ----------- -----------
        1           1           100
        2           1           158
        3           1           234
        4           2           125
        5           3           101
        6           3           105
        7           3           212
        8           3           250

    for an order I need to output:

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    or

        orderid     Products
        ----------- -----------------------
        1           100|158|234
        2           125
        3           101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2k5. Example setup:

        create table _orders
        (
            OrderId int identity(1,1) primary key nonclustered
           ,CustomerId int
        )

        create table _details
        (
            DetailId int identity(1,1) primary key nonclustered
           ,OrderId int
           ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union
        select 2 union
        select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union
        select 1,158 union
        select 1,234 union
        select 2,125 union
        select 3,105 union
        select 3,101 union
        select 3,212 union
        select 3,250

    Using the FOR XML PATH query above gives the output I want, but it is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.

    Read the article

  • Android - Lifecycle and saving an Instance State questions

    - by The Salt
    Within my application there is a form for creating a new user, with the relevant details and information about the user. There's no problem there; the question is what happens when the user leaves the activity without pressing the confirm button. Here's what I want to do:

    1. If the user presses the back button, attempt to save all the data to the database and inform the user.
    2. If the activity is interrupted (i.e. by a phone call), save all the data into a temporary location so that when the activity is at the top of the stack again, nothing appears to have changed (but the data still hasn't been saved to the database).
    3. If the activity gets killed for more resources while in the background, do the same as point 2 above (i.e. when the activity is started again, it appears that nothing has changed).
    4. If the whole application is started again (by clicking on the icon again) and there is temporary data stored from either point 2 or 3 above, navigate to the "create user" activity and display the data as if nothing has changed.

    Here's how I'm currently trying to do it:

    - Use the onDestroy() and isFinishing() functions to find out when the activity is being killed, to cover point 1 above (and then try to save all data).
    - Save all data into a Bundle with onSaveInstanceState(), to cover point 2 above.
    - Does the Bundle created with onSaveInstanceState() survive the activity being killed for more resources, so the previous state can be retrieved when it is recreated (as in point 3 above)?
    - No idea how to implement point 4.

    Any help would be massively appreciated. Cheers!
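
    A bare-bones sketch of points 1-3 using the standard callbacks (the field names and the saveDraftToDatabase helper are made up):

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // Called when the activity may be killed (interruption, low memory):
            // stash the half-finished form without touching the database.
            outState.putString("userName", nameField.getText().toString());
            outState.putString("userEmail", emailField.getText().toString());
        }

        @Override
        protected void onRestoreInstanceState(Bundle savedInstanceState) {
            super.onRestoreInstanceState(savedInstanceState);
            // Runs only when a previously saved state exists (points 2 and 3).
            nameField.setText(savedInstanceState.getString("userName"));
            emailField.setText(savedInstanceState.getString("userEmail"));
        }

        @Override
        public void onBackPressed() {
            saveDraftToDatabase();   // point 1: hypothetical helper that persists the form
            super.onBackPressed();
        }

    The saved-state Bundle does not survive the process being relaunched from the launcher icon (point 4); that usually calls for SharedPreferences or a database table instead.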

    Read the article

  • iPhone OS: Is there a way to set up KVO between two ManagedObject Entities?

    - by nickthedude
    I have two entities I want to link with KVO: a single StatTracker class that keeps track of different stats, and an Achievement class that contains information about achievements. Ideally, what I want to be able to do is set up KVO by having an instance of the Achievement class observe a value on the StatTracker class, and also set up a threshold value at which the Achievement instance should be "triggered" (triggering in this case would mean showing a UIAlertView and changing a property on the Achievement class). I'd like to set these relationships up on instantiation of the Achievement class if possible, so kind of like this:

        Achievement *achievement1 = (Achievement *)[NSEntityDescription insertNewObjectForEntityForName:@"Achievement" inManagedObjectContext:[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext]];
        [achievement1 setAchievementName:@"2 time launcher"];
        [achievement1 setAchievementDescription:@"So you've decided to come back for more eh? Here are some achievement points to get you going"];
        [achievement1 setAchievementPoints:[NSNumber numberWithInt:300]];
        [achievement1 setObjectToObserve:@"statTrackerInstace" propertyToObserve:@"timesLaunched" valueOfPropertToSatisfyAchievement:2];

    Anyone out there know how I would set this up? Is there some way I could do this by way of relationships that I'm not seeing? Thanks, Nick
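
    Plain Foundation KVO between the two objects looks roughly like this (an illustrative sketch; the key path comes from the question, while thresholdValue is an assumed attribute on Achievement):

        // After both objects exist (e.g. when the achievement is created or fetched):
        [statTracker addObserver:achievement1
                      forKeyPath:@"timesLaunched"
                         options:NSKeyValueObservingOptionNew
                         context:NULL];

        // In the Achievement class:
        - (void)observeValueForKeyPath:(NSString *)keyPath
                              ofObject:(id)object
                                change:(NSDictionary *)change
                               context:(void *)context
        {
            NSNumber *newValue = [change objectForKey:NSKeyValueChangeNewKey];
            if ([newValue intValue] >= [[self thresholdValue] intValue]) {   // assumed attribute
                // show the UIAlertView, flip the "achieved" flag, etc.
            }
        }

    The observer also has to be removed with removeObserver:forKeyPath: before either object is deallocated or turned back into a fault.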

    Read the article
