Search Results

Search found 16329 results on 654 pages for 'b long'.


  • Advanced SQL query with lots of joins

    - by lund.mikkel
    Hey fellow programmers. Okay, first let me say that this is a hard one. I know the presentation may be a little long, but I hope you'll bear with me and help me through anyway :D So I'm developing an advanced search for bicycles. I've got a lot of tables I need to join to find all, let's say, red and brown bikes. One bike may come in more than one color! I've made this query for now: SELECT DISTINCT p.products_id, #simple product id products_name, #product name products_attributes_id, #color id pov.products_options_values_name #color name FROM products p LEFT JOIN products_description pd ON p.products_id = pd.products_id INNER JOIN products_attributes pa ON pa.products_id = p.products_id LEFT JOIN products_options_values pov ON pov.products_options_values_id = pa.options_values_id LEFT JOIN products_options_search pos ON pov.products_options_values_id = pos.products_options_values_id WHERE pos.products_options_search_id = 4 #code for red OR pos.products_options_search_id = 5 #code for brown My first concern is the many joins. The Products table mainly holds the product id and its image, and the Products Description table holds more descriptive info such as the name (and product ID of course). I then have the Products Options Values table which holds all the colors and their IDs. Products Options Search contains the color IDs along with a color group ID (products_options_search_id). Red has the color group code 4 (brown is 5). The products and colors have a many-to-many relationship managed inside Products Attributes. So my question is first of all: Is it okay to make so many joins? Is it hurting performance? Second: If a bike comes in both red and brown, it'll show up twice even though I use SELECT DISTINCT. I think this is because of the INNER JOIN. Is this possible to avoid, and do I have to remove the doubles in my PHP code? Third: Bikes can be double colored (i.e. black and blue). This means that there are two rows for that bike: one where it says the color is black and one where it says it's blue. (See second question.) But if I replace the OR in the WHERE clause with AND, it removes both rows, because neither row fulfills both conditions - only the product as a whole does. What is the workaround for that? I really hope you will and can help me. I'm a little desperate right now :D Regards Mikkel Lund
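
    One way the duplicate rows and the red-AND-brown case are often handled is sketched below, using the same table and column names as the query above (an illustrative rewrite, not a tested replacement): grouping by the product collapses the one-row-per-colour duplicates, and the HAVING clause is only needed when a bike must match every requested colour group rather than any of them.

        SELECT p.products_id, pd.products_name
        FROM products p
        LEFT JOIN products_description pd ON p.products_id = pd.products_id
        INNER JOIN products_attributes pa ON pa.products_id = p.products_id
        INNER JOIN products_options_values pov ON pov.products_options_values_id = pa.options_values_id
        INNER JOIN products_options_search pos ON pov.products_options_values_id = pos.products_options_values_id
        WHERE pos.products_options_search_id IN (4, 5)               # red or brown
        GROUP BY p.products_id, pd.products_name                     # one row per bike
        HAVING COUNT(DISTINCT pos.products_options_search_id) = 2    # only for "red AND brown"; drop for OR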

    Read the article

  • Which style is preferable when writing this boolean expression?

    - by Jeppe Stig Nielsen
    I know this question is to some degree a matter of taste. I admit this is not something I don't understand, it's just something I want to hear others' opinion about. I need to write a method that takes two arguments, a boolean and a string. The boolean is in a sense (which will be obvious shortly) redundant, but it is part of a specification that the method must take in both arguments, and must raise an exception with a specific message text if the boolean has the "wrong" value. The bool must be true if and only if the string is not null or empty. So here are some different styles to write (hopefully!) the same thing. Which one do you find is the most readable, and compliant with good coding practice? // option A: Use two if, repeat throw statement and duplication of message string public void SomeMethod(bool useName, string name) { if (useName && string.IsNullOrEmpty(name)) throw new SomeException("..."); if (!useName && !string.IsNullOrEmpty(name)) throw new SomeException("..."); // rest of method } // option B: Long expression but using only && and || public void SomeMethod(bool useName, string name) { if (useName && string.IsNullOrEmpty(name) || !useName && !string.IsNullOrEmpty(name)) throw new SomeException("..."); // rest of method } // option C: With == operator between booleans public void SomeMethod(bool useName, string name) { if (useName == string.IsNullOrEmpty(name)) throw new SomeException("..."); // rest of method } // option D1: With XOR operator public void SomeMethod(bool useName, string name) { if (!(useName ^ string.IsNullOrEmpty(name))) throw new SomeException("..."); // rest of method } // option D2: With XOR operator public void SomeMethod(bool useName, string name) { if (useName ^ !string.IsNullOrEmpty(name)) throw new SomeException("..."); // rest of method } Of course you're welcome to suggest other possibilities too. Message text "..." would be something like "If 'useName' is true a name must be given, and if 'useName' is false no name is allowed".
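
    One more variant, not from the original list, that some readers find clearer than B-D: give the second condition a name and compare with != (which for two bools behaves like XOR).

        // option E: name the condition, then compare with !=
        public void SomeMethod(bool useName, string name)
        {
            bool nameProvided = !string.IsNullOrEmpty(name);
            if (useName != nameProvided)
                throw new SomeException("...");
            // rest of method
        }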

    Read the article

  • How do I display a jquery dialog box before the entire page is loaded?

    - by obarshay
    On my site a number of operations can take a long time to complete. When I know a page will take a while to load, I would like to display a progress indicator while the page is loading. Ideally I would like to say something along the lines of: $("#dialog").show("progress.php"); and have that overlay on top of the page that is being loaded (disappearing after the operation is completed). Coding the progress bar and displaying progress is not an issue, the issue is getting a progress indicator to pop up WHILE the page is being loaded. I have been trying to use JQuery's dialogs for this but they only appear after the page is already loaded. This has to be a common problem but I am not familiar enough with JavaScript to know the best way to do this. Here's simple example to illustrate the problem. The code below fails to display the dialog box before the 20 second pause is up. I have tried in Chrome and Firefox. In fact I don't even see the "Please Wait..." text. Here's the code I am using: <html> <head> <link type="text/css" href="http://jqueryui.com/latest/themes/base/ui.all.css" rel="stylesheet" /> <script type="text/javascript" src="http://jqueryui.com/latest/jquery-1.3.2.js"></script> <script type="text/javascript" src="http://jqueryui.com/latest/ui/ui.core.js"></script> <script type="text/javascript" src="http://jqueryui.com/latest/ui/ui.dialog.js"></script> </head> <body> <div id="please-wait">My Dialog</div> <script type="text/javascript"> $("#please-wait").dialog(); </script> <?php flush(); echo "Waiting..."; sleep(20); ?> </body> </html>
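
    A sketch of the workaround that is usually suggested for this, assuming a plain overlay div is acceptable instead of a full jQuery UI dialog: emit and flush the overlay before the slow server-side work, then hide it once the page has finished loading. Whether the early flush actually reaches the browser also depends on PHP output buffering (output_buffering / ob_flush) and any proxy or gzip filter in between.

        <html>
        <head>
          <script type="text/javascript" src="http://jqueryui.com/latest/jquery-1.3.2.js"></script>
        </head>
        <body>
          <div id="please-wait" style="position:absolute; top:40%; left:40%; padding:20px; background:#fff; border:1px solid #999;">
            Please wait, loading...
          </div>
          <script type="text/javascript">
            // hide the overlay once the rest of the page has arrived and loaded
            $(window).load(function () { $("#please-wait").hide(); });
          </script>
          <?php
            flush();     // push the overlay to the browser before the slow part
            sleep(20);   // stands in for the long-running work
          ?>
          <p>Page content...</p>
        </body>
        </html>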

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of the business rules for both developers and everybody else (support staff / management) in a startup environment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolve organically after that. After running this project for 3+ years, we have so many of these rules that often the only way to be sure about what the application is supposed to do in a certain situation is to go find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from scratch, but every new developer needs to go over pretty much the entire codebase in order to understand how the application works. An even bigger problem is that non-technical employees don't even have that option and are therefore forced to ask me pretty much every day how a certain case would be handled by the application. Quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable", which most commonly belong to us since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when a customer deactivates his campaign, but that can only be done once the campaign is no longer under the mandatory 6-month contract, unless an account manager approves early deactivation. I know, that's quite a lot of rules that need to be taken into account when answering the question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to a minimum, but we need to adapt to a changing marketplace - i.e. less than a year ago we had no contracts whatsoever. One idea that I have had so far is a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be to have a giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows Application Details in a grid and view 2 (activityView) will have a dropdown for application names and a grid that shows the activity for selected application As I am loading all the application details on the loading of view 1, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine but when I go to View 2 it makes a call to the service again to get application details. I would like to use the existing data if it is already loaded and if the uses comes to view 2 directly then it should get application data also. I am not sure what I am missing in the code. View Markup: <script id="appListView" type="text/x-kendo-template"> <h3 data-bind="html: displayName"></h3> <div data-role="grid" data-editable="{'mode':'popup'}" data-bind="source: items" data-columns="[ {'field': 'Name'}, {'field': 'ContactEmail','title':'Contact Email'} ]"> </div> </script> <script id="" type="text\x-kendo-template"> <div> Activity for Application&nbsp;&nbsp; <input name="AppName" data-role="dropdownlist" data-source="appsModel.items" data-text-field="Name" data-value-field="Id" data-option-label="Choose an application name" style="width:250px;" /> </div> <div id="Activities" data-role="grid" data-bind="source: items" data-auto-bind="false" data-columns="[ {'field': 'Domain','title':'Domain'}, {'field': 'ActivityType','title':'Activity Type'} ]"> </div> </script> js with DataSource and View Model: //data sources var applications = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, serverFiltering : true, transport: { read: { url: '/api/App', dataType: 'json', type:'GET' } } }); var activities = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, transport: { read: { url: '/api/Activity', dataType: 'json', type: 'GET' }, parameterMap: function (data, type) { if (type == "read") { return 'appId=' + $("#AppName").val() ; } } } }); //Models var appsModel = kendo.observable({ items: applications, displayName: 'My Applications' }); var activityModel = kendo.observable({ items: activities, onAppChange: function(t){ $("#Activities").data("kendoGrid").dataSource.read(); }, dispayName: 'Application Activities' }); //views var layout = new kendo.Layout("layout-template"); var appListView = new kendo.View("appListView", { model: appsModel }); var activityView = new kendo.View("activityView", { model: activityModel }); Thank you for taking time to read this long question.
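
    One pattern sometimes used for this, sketched against the code above (the data().length guard and the show-event binding are assumptions, not something from the original post): leave both widgets bound to the shared applications data source, and only trigger a read when the user reaches view 2 before view 1 has loaded anything.

        activityView.bind("show", function () {
            // re-use the data view 1 already loaded; only hit the service
            // when the user lands on view 2 directly
            if (applications.data().length === 0) {
                applications.read();
            }
        });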

    Read the article

  • Rails & Twilio: Receiving nil when storing texts received from Twilio

    - by Jon Smooth
    I have set up the request URL in my Twilio account to have it POST to: myurl.com/receivetext. It appears to be successfully posting because when I check the database using the Heroku console I see the following: Post id: 5, body: nil, from: nil, created_at: "2012-06-14 17:28:01", updated_at: "2012-06-14 17:28:01" Why is it receiving nil for the body and from attributes? I can't figure out what I'm doing wrong! The created and updated at are storing successfully but the two attributes that I care about continue to be stored as nil. Here's the Receive Text controller which is receiving the Post request from Twilio: class ReceiveTextController < ApplicationController def index @post=Post.create!(body: params[:Body], from: params[:From]) end end EDIT: When I dump the params I receive the following: "{\"controller\"=\"receive_text\", \"action\"=\"index\"}" I attained this by inserting the following into my ReceiveText controller. @params = Post.create!(body: params.inspect, from: "Dumping Params") and then opening up the Heroku console to find the database entry with from = "Dumping Params". I simulated a Twilio request with a curl with the following command curl -X POST myurl.com/receivetext route -d 'AccountSid=AC123&From=%2B19252411234' I checked the production database again and noticed that the curl request did work when obtaining the FROM atribute. It stored the following: params.inspect returned "{\"AccountSid\"=\"AC123\", \"From\"=\"+19252411234\", \"co..." I received a comment stating: "As long as twilio is hitting the same URL with the same method (GET/POST) it should be filling the params array as well" I have no idea how to make this comment actionable. Any help would be greatly appreciated! I'm very new to rails. Here's my database migration (I have both attributes set to string. I have tried setting it to text and that didn't work either) : class CreatePosts < ActiveRecord::Migration def change create_table :posts do |t| t.string :body t.string :from t.timestamps end end end Here is my Post model: class Post < ActiveRecord::Base attr_accessible :body, :from end Routes (everything appears to be routing just fine) : MovieApp::Application.routes.draw do get "receive_text/index" get "pages/home" get "send_text/send_text_message" root to: 'pages#home' match '/receivetext', to: 'receive_text#index' match '/pages/home', to: 'pages#home' match '/sendtext', to: 'send_text#send_text_message' end Here's my gemfile (incase it helps) source 'https://rubygems.org' gem 'rails', '3.2.3' gem 'badfruit' gem 'twilio-ruby' gem 'logger' gem 'jquery-rails' group :production do gem 'pg' end group :development, :test do gem 'sqlite3' end group :assets do gem 'sass-rails', '~> 3.2.3' gem 'coffee-rails', '~> 3.2.1' gem 'uglifier', '>= 1.0.3' end
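
    A small sketch of the instrumentation that usually narrows this down, assuming Rails 3.2 as in the Gemfile above: constrain the route to POST (the method configured for the Twilio request URL) and log the raw parameters on every hit, so a GET-configured request URL or a slightly different path shows up immediately in the Heroku logs; the CSRF skip is a common extra for webhook endpoints.

        # config/routes.rb
        match '/receivetext', to: 'receive_text#index', via: :post

        # app/controllers/receive_text_controller.rb
        class ReceiveTextController < ApplicationController
          skip_before_filter :verify_authenticity_token   # Twilio cannot send a CSRF token
          def index
            Rails.logger.info "Twilio params: #{params.inspect}"
            Post.create!(body: params[:Body], from: params[:From])
            render nothing: true
          end
        end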

    Read the article

  • On Redirect - Failed to generate a user instance of SQL Server...

    - by Craig Russell
    Hello (this is a long post sorry), I am writing a application in ASP.NET MVC 2 and I have reached a point where I am receiving this error when I connect remotely to my Server. Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. I thought I had worked around this problem locally, as I was getting this error in debug when site was redirected to a baseUrl if a subdomain was invalid using this code: protected override void Initialize(RequestContext requestContext) { string[] host = requestContext.HttpContext.Request.Headers["Host"].Split(':'); _siteProvider.Initialise(host, LiveMeet.Properties.Settings.Default["baseUrl"].ToString()); base.Initialize(requestContext); } protected override void OnActionExecuting(ActionExecutingContext filterContext) { if (Site == null) { string[] host = filterContext.HttpContext.Request.Headers["Host"].Split(':'); string newUrl; if (host.Length == 2) newUrl = "http://sample.local:" + host[1]; else newUrl = "http://sample.local"; Response.Redirect(newUrl, true); } ViewData["Site"] = Site; base.OnActionExecuting(filterContext); } public Site Site { get { return _siteProvider.GetCurrentSite(); } } The Site object is returned from a Provider named siteProvider, this does two checks, once against a database containing a list of all available subdomains, then if that fails to find a valid subdomain, or valid domain name, searches a memory cache of reserved domains, if that doesn't hit then returns a baseUrl where all invalid domains are redirected. locally this worked when I added the true to Response.Redirect, assuming a halting of the current execution and restarting the execution on the browser redirect. What I have found in the stack trace is that the error is thrown on the second attempt to access the database. #region ISiteProvider Members public void Initialise(string[] host, string basehost) { if (host[0].Contains(basehost)) host = host[0].Split('.'); Site getSite = GetSites().WithDomain(host[0]); if (getSite == null) { sites.TryGetValue(host[0], out getSite); } _site = getSite; } public Site GetCurrentSite() { return _site; } public IQueryable<Site> GetSites() { return from p in _repository.groupDomains select new Site { Host = p.domainName, GroupGuid = (Guid)p.groupGuid, IsSubDomain = p.isSubdomain }; } #endregion The Linq query ^^^ is hit first, with a filter of WithDomain, the error isn't thrown till the WithDomain filter is attempted. In summary: The error is hit after the page is redirected, so the first iteration is executing as expected (so permissions on the database are correct, user profiles etc) shortly after the redirect when it filters the database query for the possible domain/subdomain of current redirected page, it errors out.

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • How to define an extern, C struct returning function in C++ using MSVC?

    - by DK
    The following source file will not compile with the MSVC compiler (v15.00.30729.01): /* stest.c */ #ifdef __cplusplus extern "C" { #endif struct Test; extern struct Test make_Test(int x); struct Test { int x; }; extern struct Test make_Test(int x) { struct Test r; r.x = x; return r; } #ifdef __cplusplus } #endif Compiling with cl /c /Tpstest.c produces the following error: stest.c(8) : error C2526: 'make_Test' : C linkage function cannot return C++ class 'Test' stest.c(6) : see declaration of 'Test' Compiling without /Tp (which tells cl to treat the file as C++) works fine. The file also compiles fine in DigitalMars C and GCC (from mingw) in both C and C++ modes. I also used -ansi -pedantic -Wall with GCC and it had no complaints. For reasons I will go into below, we need to compile this file as C++ for MSVC (not for the others), but with functions being compiled as C. In essence, we want a normal C compiler... except for about six lines. Is there a switch or attribute or something I can add that will allow this to work? The code in question (though not the above; that's just a reduced example) is being produced by a code generator. As part of this, we need to be able to generate floating point nans and infinities as constants (long story), meaning we have to compile with MSVC in C++ mode in order to actually do this. We only found one solution that works, and it only works in C++ mode. We're wrapping the code in extern "C" {...} because we want to control the mangling and calling convention so that we can interface with existing C code. ... also because I trust C++ compilers about as far as I could throw a smallish department store. I also tried wrapping just the reinterpret_cast line in extern "C++" {...}, but of course that doesn't work. Pity. There is a potential solution I found which requires reordering the declarations such that the full struct definition comes before the function foward decl., but this is very inconvenient due to the way the codegen is performed, so I'd really like to avoid having to go down that road if I can.
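
    For reference, a sketch of the reordering workaround mentioned at the end of the question - moving the full struct definition above the declaration is what lets MSVC accept the C-linkage function in C++ mode; whether that is acceptable here depends on the code generator.

        /* stest.c - reordered so the complete type precedes the declaration */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test { int x; };                 /* complete POD type first */

        extern struct Test make_Test(int x);    /* now returns a complete type */

        extern struct Test make_Test(int x)
        {
            struct Test r;
            r.x = x;
            return r;
        }

        #ifdef __cplusplus
        }
        #endif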

    Read the article

  • C# - calling ext. DLL function containing Delphi "variant record" parameter

    - by CaldonCZE
    Hello, in an external (Delphi-created) DLL I've got the following function that I need to call from a C# application. function ReadMsg(handle: longword; var Msg: TRxMsg): longword; stdcall; external 'MyDll.dll' name 'ReadMsg'; The "TRxMsg" type is a variant record, defined as follows: TRxMsg = record case TypeMsg: byte of 1: (accept, mask: longword); 2: (SN: string[6]); 3: (rx_rate, tx_rate: word); 4: (rx_status, tx_status, ctl0, ctl1, rflg: byte); end; In order to call the function from C#, I declared an auxiliary structure "my9Bytes" containing an array of bytes and defined that it should be marshalled as a 9-byte array (which is exactly the size of the Delphi record). private struct my9Bytes { [MarshalAs(UnmanagedType.ByValArray, ArraySubType = UnmanagedType.U1, SizeConst = 9)] public byte[] data; } Then I declared the imported "ReadMsg" function, using the "my9Bytes" struct. [DllImport("MyDll.dll")] private static extern uint ReadMsg(uint handle, ref my9Bytes myMsg); I can call the function with no problem... Then I need to create a structure corresponding to the original "TRxMsg" variant record and convert my auxiliary "myMsg" array into this structure. I don't know any C# equivalent of a Delphi variant record, so I used inheritance and created the following classes. public abstract class TRxMsg { public byte typeMsg; } public class TRxMsgAcceptMask:TRxMsg { public uint accept, mask; //... } public class TRxMsgSN:TRxMsg { public string SN; //... } public class TRxMsgMRate:TRxMsg { public ushort rx_rate, tx_rate; //... } public class TRxMsgStatus:TRxMsg { public byte rx_status, tx_status, ctl0, ctl1, rflg; //... } Finally I create the appropriate object and initialize it with values manually converted from the "myMsg" array (I used BitConverter for this). This does work fine, but the solution seems to me a little too complicated; it should be possible to do this somehow more directly, without the auxiliary "my9Bytes" structure or the inheritance and manual converting of individual values. So I'd like to ask you for suggestions on the best way to do this. Thanks a lot!
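
    A more direct alternative that is often used for this kind of variant record is a single C# struct with an explicit layout, so the marshaller overlays the variants itself; this is only a sketch - the name TRxMsgNative is a placeholder, the offsets assume the Delphi record is packed with the tag byte first (verify against the actual record layout), and the ShortString and nested-record variants still need manual handling, so only the fixed-size numeric variants are shown.

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Explicit, Pack = 1)]
        struct TRxMsgNative
        {
            [FieldOffset(0)] public byte TypeMsg;    // variant selector

            // case 1: accept / mask
            [FieldOffset(1)] public uint Accept;
            [FieldOffset(5)] public uint Mask;

            // case 3: rx_rate / tx_rate
            [FieldOffset(1)] public ushort RxRate;
            [FieldOffset(3)] public ushort TxRate;

            // case 4: status bytes
            [FieldOffset(1)] public byte RxStatus;
            [FieldOffset(2)] public byte TxStatus;
            [FieldOffset(3)] public byte Ctl0;
            [FieldOffset(4)] public byte Ctl1;
            [FieldOffset(5)] public byte Rflg;
        }

        [DllImport("MyDll.dll")]
        static extern uint ReadMsg(uint handle, ref TRxMsgNative msg);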

    Read the article

  • Generic linked list in c++

    - by itsaboy
    I have been struggling for too long a time now with a rather simple question about how to create a generic linked list in C++. The list should be able to contain several types of structs, but each list will only contain one type of struct. The problem arises when I want to implement the getNode() function [see below], because then I have to specify which of the structs it should return. I have tried to substitute the structs with classes, where the getNode function returns a base class that is inherited by all the other classes, but it still does not do the trick, since the compiler then does not allow the getNode function to return anything but the base class. So here is some code snippet: typedef struct struct1 { int param1; (...) } struct1; typedef struct struct2 { double param1; (...) } struct2; typedef struct node { struct1 data; node* link; } node; class LinkedList { public: node *first; int nbrOfNodes; LinkedList(); void addNode(struct1); struct1 getNode(); bool isEmpty(); }; LinkedList::LinkedList() { first = NULL; nbrOfNodes = 0; } void LinkedList::addNode(struct1 newData) { if (nbrOfNodes == 0) { first = new node; first->data = newData; } else { node *it = first; for (int i = 0; i < nbrOfNodes; i++) { it = it->link; } node *newNode = new node; newNode->data = newData; it->link = newNode; } nbrOfNodes++; } bool LinkedList::isEmpty() { return !nbrOfNodes; } struct1 LinkedList::getNode() { struct1 returnData = first->data; node* deleteNode = first; nbrOfNodes--; if (nbrOfNodes) first = deleteNode->link; delete deleteNode; return returnData; } So the question, put in one sentence, is as follows: How do I adjust the above linked list class so that it can also be used for struct2, without having to create a new, almost identical list class for struct2 objects? As I said above, each instance of LinkedList will only deal with either struct1 or struct2. Grateful for hints or help
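
    A sketch of the template route, which is the usual answer here: make the element type a parameter of the list class, so LinkedList<struct1> and LinkedList<struct2> are generated from the same code (error handling for getNode() on an empty list is left out).

        template <typename T>
        class LinkedList {
        public:
            LinkedList() : first(0), nbrOfNodes(0) {}

            void addNode(const T& newData) {
                Node* newNode = new Node;
                newNode->data = newData;
                newNode->link = 0;
                if (nbrOfNodes == 0) {
                    first = newNode;
                } else {
                    Node* it = first;
                    while (it->link != 0)    // walk to the last node
                        it = it->link;
                    it->link = newNode;
                }
                ++nbrOfNodes;
            }

            T getNode() {                    // removes and returns the first element
                T returnData = first->data;
                Node* deleteNode = first;
                first = deleteNode->link;
                --nbrOfNodes;
                delete deleteNode;
                return returnData;
            }

            bool isEmpty() const { return nbrOfNodes == 0; }

        private:
            struct Node {
                T data;
                Node* link;
            };
            Node* first;
            int nbrOfNodes;
        };

        // usage: one list per struct type
        // LinkedList<struct1> a;  LinkedList<struct2> b;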

    Read the article

  • ggplot geom_bar - too many bars

    - by Andreas
    I am sorry for the non-informative title. exstatus <- structure(list(org = structure(c(2L, 1L, 7L, 3L, 6L, 2L, 2L, 7L, 2L, 1L, 2L, 2L, 4L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 8L, 4L, 2L, 2L, 5L, 7L, 8L, 6L, 2L, 7L, 2L, 2L, 7L, 2L, 2L, 2L, 2L, 4L, 2L, 2L, 1L, 2L, 2L, 7L, 2L, 7L, 2L, 4L, 7L, 2L, 4L, 2L, 2L, 2L, 7L, 7L, 2L, 7L, 2L, 7L, 1L, 7L, 5L, 2L, 2L, 1L, 7L, 3L, 5L, 3L, 2L, 2L, 2L, 7L, 4L, 2L, 7L, 2L, 4L, 2L, 2L, 2L, 4L, 6L, 2L, 4L, 4L, 7L, 2L, 2L, 2L, 7L, 6L, 2L, 2L, 1L, 2L, 2L, 4L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 6L, 2L, 7L, 7L, 2L, 7L, 8L, 7L, 8L, 6L, 7L, 2L, 2L, 2L, 7L, 2L, 7L, 2L, 7L, 2L, 7L, 2L, 4L, 2L, 7L, 2L, 4L, 8L, 2L, 3L, 4L, 2L, 7L, 3L, 8L, 8L, 6L, 2L, 2L, 2L, 7L, 7L, 7L, 7L, 2L, 2L, 2L, 2L, 7L, 2L, 4L, 7L, 7L, 8L, 2L, 7L, 2L, 2L, 2L, 2L, 1L, 7L, 7L, 2L, 1L, 7L, 2L, 7L, 7L, 2L, 2L, 7L, 2L, 2L, 7L, 7L, 2L, 7L, 2L, 7L, 5L, 2L), .Label = c("gl", "il", "gm", "im", "gk", "ik", "tv", "tu"), class = "factor"), art = structure(c(2L, 1L, 3L, 1L, 2L, 2L, 2L, 3L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 2L, 2L, 2L, 1L, 3L, 3L, 2L, 2L, 3L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 3L, 2L, 3L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 2L, 3L, 2L, 3L, 1L, 3L, 1L, 2L, 2L, 1L, 3L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 2L, 3L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 3L, 3L, 2L, 3L, 3L, 3L, 3L, 2L, 3L, 2L, 2L, 2L, 3L, 2L, 3L, 2L, 3L, 2L, 3L, 2L, 2L, 2L, 3L, 2L, 2L, 3L, 2L, 1L, 2L, 2L, 3L, 1L, 3L, 3L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 3L, 3L, 3L, 2L, 3L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 2L, 1L, 3L, 2L, 3L, 3L, 2L, 2L, 3L, 2L, 2L, 3L, 3L, 2L, 3L, 2L, 3L, 1L, 2L), .Label = c("Finish", "Attending", "Something"), class = "factor"), type = structure(c(2L, 2L, 5L, 3L, 1L, 2L, 2L, 5L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 4L, 3L, 2L, 2L, 1L, 5L, 4L, 1L, 2L, 5L, 2L, 2L, 5L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 5L, 2L, 5L, 2L, 3L, 5L, 2L, 3L, 2L, 2L, 2L, 5L, 5L, 2L, 5L, 2L, 5L, 2L, 5L, 1L, 2L, 2L, 2L, 5L, 3L, 1L, 3L, 2L, 2L, 2L, 5L, 3L, 2L, 5L, 2L, 3L, 2L, 2L, 2L, 3L, 1L, 2L, 3L, 3L, 5L, 2L, 2L, 2L, 5L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 5L, 5L, 2L, 5L, 4L, 5L, 4L, 1L, 5L, 2L, 2L, 2L, 5L, 2L, 5L, 2L, 5L, 2L, 5L, 2L, 3L, 2L, 5L, 2L, 3L, 4L, 2L, 3L, 3L, 2L, 5L, 3L, 4L, 4L, 1L, 2L, 2L, 2L, 5L, 5L, 5L, 5L, 2L, 2L, 2L, 2L, 5L, 2L, 3L, 5L, 5L, 4L, 2L, 5L, 2L, 2L, 2L, 2L, 2L, 5L, 5L, 2L, 2L, 5L, 2L, 5L, 5L, 2L, 2L, 5L, 2L, 2L, 5L, 5L, 2L, 5L, 2L, 5L, 1L, 2L), .Label = c("short", "long", "between", "young", "old"), class = "factor")), .Names = c("org", "art", "type"), row.names = c(NA, -192L), class = "data.frame") and then the plot ggplot(exstatus, aes(x=type, fill=art))+ geom_bar(aes(y=..count../sum(..count..)),position="dodge") The problem is that the two rightmost bars ("young", "old") are too thick - "something" takes up the whole width - whcih is not what I intended. I am sorry that I can not explain it better.

    Read the article

  • Google Map key in Android?

    - by Amandeep singh
    I am developing an Android app in which I have to show a map view. I have done it once in a previous app, but the key I used previously is not working in this app. It is just showing a pin in the application with a blank screen. Do I have to use a different Maps key for each project? If not, kindly help me with how I can use my previous key in this one. I also tried generating a new key, but it gave me the same key back. Here is the code I used: public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.map); btn=(Button)findViewById(R.id.mapbtn); str1=getIntent().getStringExtra("LATITUDE"); str2=getIntent().getStringExtra("LONGITUDE"); mapView = (MapView)findViewById(R.id.mapView1); //View zoomView = mapView.getZoomControls(); mapView.setBuiltInZoomControls(true); //mapView.setSatellite(true); mc = mapView.getController(); btn.setOnClickListener(this); MapOverlay mapOverlay = new MapOverlay(); List<Overlay> listOfOverlays = mapView.getOverlays(); listOfOverlays.clear(); listOfOverlays.add(mapOverlay); String coordinates[] = {str1, str2}; double lat = Double.parseDouble(coordinates[0]); double lng = Double.parseDouble(coordinates[1]); p = new GeoPoint( (int) (lat * 1E6), (int) (lng * 1E6)); mc.animateTo(p); mc.setZoom(17); mapView.invalidate(); //mp.equals(o); } @Override protected boolean isRouteDisplayed() { // TODO Auto-generated method stub return false; } class MapOverlay extends com.google.android.maps.Overlay { @Override public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) { super.draw(canvas, mapView, shadow); Paint mPaint = new Paint(); mPaint.setDither(true); mPaint.setColor(Color.RED); mPaint.setStyle(Paint.Style.FILL_AND_STROKE); mPaint.setStrokeJoin(Paint.Join.ROUND); mPaint.setStrokeCap(Paint.Cap.ROUND); mPaint.setStrokeWidth(2); //---translate the GeoPoint to screen pixels--- Point screenPts = new Point(); mapView.getProjection().toPixels(p, screenPts); //---add the marker--- Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.pin); canvas.drawBitmap(bmp, screenPts.x, screenPts.y-50, null); return true; } Thanks....

    Read the article

  • How to access a matrix in a matlab struct's field from a mex function?

    - by B. Ruschill
    I'm trying to figure out how to access a matrix that is stored in a field of a MATLAB structure from a mex function. That's awfully long-winded... Let me explain: I have a MATLAB struct that was defined like the following: matrixStruct = struct('matrix', {4, 4, 4; 5, 5, 5; 6, 6, 6}) I have a mex function in which I would like to be able to receive a pointer to the first element in the matrix (matrix[0][0], in C terms), but I've been unable to figure out how to do that. I have tried the following: /* Pointer to the first element in the matrix (supposedly)... */ double *ptr = mxGetPr(mxGetField(prhs[0], 0, "matrix")); /* Incrementing the pointer to access all values in the matrix */ for(i = 0; i < 3; i++){ printf("%f\n", *(ptr + (i * 3))); printf("%f\n", *(ptr + 1 + (i * 3))); printf("%f\n", *(ptr + 2 + (i * 3))); } What this ends up printing is the following: 4.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 I have also tried variations of the following, thinking that perhaps it was something wonky with nested function calls, but to no avail: /* Pointer to the first location of the mxArray */ mxArray *fieldValuePtr = mxGetField(prhs[0], 0, "matrix"); /* Get the double pointer to the first location in the matrix */ double *ptr = mxGetPr(fieldValuePtr); /* Same for loop code here as written above */ Does anyone have an idea as to how I can achieve what I'm trying to, or what I am potentially doing wrong? Thanks! Edit: As per yuk's comment, I tried doing similar operations on a struct that has a field called array which is a one-dimensional array of doubles. The struct containing the array is defined as follows: arrayStruct = struct('array', {4.44, 5.55, 6.66}) I tried the following on the arrayStruct from within the mex function: mptr = mxGetPr(mxGetField(prhs[0], 0, "array")); printf("%f\n", *(mptr)); printf("%f\n", *(mptr + 1)); printf("%f\n", *(mptr + 2)); ...but the output followed what was printed earlier: 4.440000 0.000000 0.000000
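
    For reference, a sketch of the usual indexing pattern, on the assumption that the field really holds one 3x3 double matrix: mxArray data is stored column-major, so the bounds should come from mxGetM/mxGetN rather than hard-coded strides. Note also that struct('matrix', {4, 4, 4; 5, 5, 5; 6, 6, 6}) builds a 3x3 struct array whose field is a scalar in each element; struct('matrix', {[4 4 4; 5 5 5; 6 6 6]}) gives a single struct whose field is the whole matrix, which is probably what was intended.

        /* sketch: print every element of the 'matrix' field, column-major */
        mxArray *field = mxGetField(prhs[0], 0, "matrix");
        double *ptr = mxGetPr(field);
        mwSize rows = mxGetM(field);
        mwSize cols = mxGetN(field);
        mwSize r, c;

        for (c = 0; c < cols; c++) {
            for (r = 0; r < rows; r++) {
                /* element (r, c) lives at ptr[c * rows + r] in MATLAB's column-major storage */
                mexPrintf("%f\n", ptr[c * rows + r]);
            }
        }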

    Read the article

  • Memory management with Objective-C Distributed Objects: my temporary instances live forever!

    - by jkp
    I'm playing with Objective-C Distributed Objects and I'm having some problems understanding how memory management works under the system. The example given below illustrates my problem: Protocol.h #import <Foundation/Foundation.h> @protocol DOServer - (byref id)createTarget; @end Server.m #import <Foundation/Foundation.h> #import "Protocol.h" @interface DOTarget : NSObject @end @interface DOServer : NSObject < DOServer > @end @implementation DOTarget - (id)init { if ((self = [super init])) { NSLog(@"Target created"); } return self; } - (void)dealloc { NSLog(@"Target destroyed"); [super dealloc]; } @end @implementation DOServer - (byref id)createTarget { return [[[DOTarget alloc] init] autorelease]; } @end int main() { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; DOServer *server = [[DOServer alloc] init]; NSConnection *connection = [[NSConnection new] autorelease]; [connection setRootObject:server]; if ([connection registerName:@"test-server"] == NO) { NSLog(@"Failed to vend server object"); } else [[NSRunLoop currentRunLoop] run]; [pool drain]; return 0; } Client.m #import <Foundation/Foundation.h> #import "Protocol.h" int main() { unsigned i = 0; for (; i < 3; i ++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; id server = [NSConnection rootProxyForConnectionWithRegisteredName:@"test-server" host:nil]; [server setProtocolForProxy:@protocol(DOServer)]; NSLog(@"Created target: %@", [server createTarget]); [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]; [pool drain]; } return 0; } The issue is that any remote objects created by the root proxy are not released when their proxy counterparts in the client go out of scope. According to the documentation: When an object’s remote proxy is deallocated, a message is sent back to the receiver to notify it that the local object is no longer shared over the connection. I would therefore expect that as each DOTarget goes out of scope (each time around the loop) it's remote counterpart would be dellocated, since there is no other reference to it being held on the remote side of the connection. In reality this does not happen: the temporary objects are only deallocate when the client application quits, or more accurately, when the connection is invalidated. I can force the temporary objects on the remote side to be deallocated by explicitly invalidating the NSConnection object I'm using each time around the loop and creating a new one but somehow this just feels wrong. Is this the correct behaviour from DO? Should all temporary objects live as long as the connection that created them? Are connections therefore to be treated as temporary objects which should be opened and closed with each series of requests against the server? Any insights would be appreciated.

    Read the article

  • Error in Print Function in Bubble Sort MIPS?

    - by m00nbeam360
    Sorry that this is such a long block of code, but do you see any obvious syntax errors in this? I feel like the problem is that the code isn't printing correctly since the sort and swap methods were from my textbook. Please help if you can! .data save: .word 1,2,4,2,5,6 size: .word 6 .text swap: sll $t1, $a1, 2 #shift bits by 2 add $t1, $a1, $t1 #set $t1 address to v[k] lw $t0, 0($t1) #load v[k] into t1 lw $t2, 4($t1) #load v[k+1] into t1 sw $t2, 0($t1) #swap addresses sw $t0, 4($t1) #swap addresses jr $ra #return sort: addi $sp, $sp, -20 #make enough room on the stack for five registers sw $ra, 16($sp) #save the return address on the stack sw $s3, 12($sp) #save $s3 on the stack sw $s2, 8($sp) #save Ss2 on the stack sw $s1, 4($sp) #save $s1 on the stack sw $s0, 0($sp) #save $s0 on the stack move $s2, $a0 #copy the parameter $a0 into $s2 (save $a0) move $s3, $a1 #copy the parameter $a1 into $s3 (save $a1) move $s0, $zero #start of for loop, i = 0 for1tst: slt $t0, $s0, $s3 #$t0 = 0 if $s0 S $s3 (i S n) beq $t0, $zero, exit1 #go to exit1 if $s0 S $s3 (i S n) addi $s1, $s0, -1 #j - i - 1 for2tst: slti $t0, $s1, 0 #$t0 = 1 if $s1 < 0 (j < 0) bne $t0, $zero, exit2 #$t0 = 1 if $s1 < 0 (j < 0) sll $t1, $s1, 2 #$t1 = j * 4 (shift by 2 bits) add $t2, $s2, $t1 #$t2 = v + (j*4) lw $t3, 0($t2) #$t3 = v[j] lw $t4, 4($t2) #$t4 = v[j+1] slt $t0, $t4, $t3 #$t0 = 0 if $t4 S $t3 beq $t0, $zero, exit2 #go to exit2 if $t4 S $t3 move $a0, $s2 #1st parameter of swap is v(old $a0) move $a1, $s1 #2nd parameter of swap is j jal swap #swap addi $s1, $s1, -1 j for2tst #jump to test of inner loop j print exit2: addi $s0, $s0, 1 #i = i + 1 j for1tst #jump to test of outer loop exit1: lw $s0, 0($sp) #restore $s0 from stack lw $s1, 4($sp) #resture $s1 from stack lw $s2, 8($sp) #restore $s2 from stack lw $s3, 12($sp) #restore $s3 from stack lw $ra, 16($sp) #restore $ra from stack addi $sp, $sp, 20 #restore stack pointer jr $ra #return to calling routine .data space:.asciiz " " # space to insert between numbers head: .asciiz "The sorted numbers are:\n" .text print:add $t0, $zero, $a0 # starting address of array add $t1, $zero, $a1 # initialize loop counter to array size la $a0, head # load address of print heading li $v0, 4 # specify Print String service syscall # print heading out: lw $a0, 0($t0) # load fibonacci number for syscall li $v0, 1 # specify Print Integer service syscall # print fibonacci number la $a0, space # load address of spacer for syscall li $v0, 4 # specify Print String service syscall # output string addi $t0, $t0, 4 # increment address addi $t1, $t1, -1 # decrement loop counter bgtz $t1, out # repeat if not finished jr $ra # return

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want is simple, but I cannot find any solution to my problem. I'm using Play Framework 1.2.3 and it uses Hibernate as the JPA provider, so I think Play itself has nothing to do with the problem. I have some classes (I omit the irrelevant fields): public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can always have the same User, but the Session has to be different for each instance. And the DataContainer of a Task also has to be different in each instance. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations; it should be sufficient. In other words: Every Task must have one DataContainer. Every DataContainer must have one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a Play class and provides the primary id as a long type. When I create an object for each entity and 'connect' them (I mean the associations), it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) should be unset in the database. That would be okay, because it's optional. I don't know, I think I have multiple problems. Am I using the right annotation, @OneToOne? I found on the internet some additional annotations and attributes: @JoinColumn and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional association essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE), and the constraint changed from NO ACTION on update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a Session, the DataContainer and User are deleted. That's wrong for me. EDIT: I'm using PostgreSQL 9, the JDBC stuff is included in Play, and my only db config is db=postgres://app:app@localhost:5432/app
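
    A sketch of the usual two-step delete for an optional reference like this, using the Play 1.2 model API (save()/delete(); the find("bySession", ...) finder is an assumption about the model): clear the foreign key first, then delete the Session. It also usually follows that cascade = CascadeType.ALL does not belong on the DataContainer-to-Session and DataContainer-to-User associations, since deleting a child should not remove its parents.

        // detach the Session from its DataContainer, then delete it
        DataContainer dc = DataContainer.find("bySession", session).first();
        if (dc != null) {
            dc.session = null;   // session_id becomes NULL in the database
            dc.save();
        }
        session.delete();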

    Read the article

  • Alternative to Page_Load in ASP.NET (and a good WTF story)

    - by Jason
    Woo, I have a doozy of a problem (might even be one for the Daily WTF) and I'm hoping there's a solution. (My apologies for the long post...) I have been working on a website that I inherited about a month ago. One of the parts I have been working on is fixing one of the controls (essentially a dynamic header bar) so that it displays additional information as requested by my users. As part of doing this project, I created a Site.master file so that I wouldn't have to recode the header bar into every single page. When I first started doing this, it seemingly worked very well. All the pages I had developed looked great and the bar updated as it should displaying the information as it should. Well, when I dropped the Site.master (and this control) into older site pages (ones I did not specifically develop) I noticed that it looked bad on some of them, but not all of them. When I say it looked bad, basically, the control would left-align itself to the page rather than center as it should. I spent a couple hours debugging to no avail - CSS looked correct, the HTML appeared to be okay, I didn't see anything in the Javascript (although, I did miss something as I'll point out in a second), and even the old code looked correct (to the best that it could - it's not very well written). Another coworker took a look at the site and couldn't find anything at first, either. It wasn't until I just thought to look at the rendered source code of the page (I had been working in the developer view up to this point in IE8) that it became clear what was wrong. The original developer performs searches on many of the pages. To accomplish this, he queries the database for ALL the data and then loads them into Javascript arrays within the page so he can get access to them. This in itself is a huge problem because we're talking about thousands of items, and it obviously isn't scalable (and, yes, the site is slow). However, it finally clicked what was screwing up the Site.master - when he loads the data into the Javascript arrays, he writes out the data to the HTML upon Page_Load using numerous Response.Write(string) calls. The WTF (and what was messing me up) is that he inserts the Javascript before the DOCTYPE causing IE to go into quirks mode! So, because I need to at least get this release out (I'll fix the real problem later), I was wondering: is there a way I can force this Javascript to be inserted elsewhere into the HTML—after the DOCTYPE at the very least? Right now, all the Response.Write() calls are being done in the Page_Load method. I just need them to be inserted later.
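
    A sketch of the stop-gap being asked for, assuming the old Response.Write calls can be collected into one script string in Page_Load (the searchData name is a placeholder): ClientScript.RegisterStartupScript emits the block inside the form, well after the DOCTYPE, so IE stays out of quirks mode until the real fix (not loading thousands of rows into JavaScript) can be made.

        protected void Page_Load(object sender, EventArgs e)
        {
            // build the same array-initialisation script the Response.Write calls used to emit
            var script = new System.Text.StringBuilder();
            script.Append("var searchData = [");
            // ... append the data here instead of calling Response.Write ...
            script.Append("];");

            // rendered near the end of the <form>, after the DOCTYPE and <head>
            ClientScript.RegisterStartupScript(this.GetType(), "searchData", script.ToString(), true);
        }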

    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong, it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and call to the web service and explain to the Operations guys how to run those unit tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relate to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download say a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
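
    A minimal sketch of the kind of NUnit fixture being described, with the base URL taken from an environment variable so operations (or the automated daily deploy) can point the same test binaries at UAT or production; the variable name, URLs and paths are placeholders.

        using System;
        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class SmokeTests
        {
            private string baseUrl;

            [SetUp]
            public void SetUp()
            {
                // e.g. set SMOKE_BASE_URL=http://test.example.com before running the tests
                baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL") ?? "http://www.example.com";
            }

            [Test]
            public void HomePageReturns200()
            {
                var request = (HttpWebRequest)WebRequest.Create(baseUrl + "/");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
                }
            }

            [Test]
            public void WebServiceRespondsToPing()
            {
                var request = (HttpWebRequest)WebRequest.Create(baseUrl + "/service.asmx?WSDL");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
                }
            }
        }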

    Read the article

  • Copying metadata over a database link in Oracle 10g

    - by Tunde
    Thanks in advance for your help experts. I want to be able to copy over database objects from database A into database B with a procedure created on database B. I created a database link between the two and have tweaked the get_ddl function of the dbms_metadata to look like this: create or replace function GetDDL ( p_name in MetaDataPkg.t_string p_type in MetaDataPkg.t_string ) return MetaDataPkg.t_longstring is -- clob v_clob clob; -- array of long strings c_SYSPrefix constant char(4) := 'SYS_'; c_doublequote constant char(1) := '"'; v_longstrings metadatapkg.t_arraylongstring; v_schema metadatapkg.t_string; v_fullength pls_integer := 0; v_offset pls_integer := 0; v_length pls_integer := 0; begin SELECT DISTINCT OWNER INTO v_schema FROM all_objects@ENTORA where object_name = upper(p_name); -- get DDL v_clob := dbms_metadata.get_ddl(p_type, upper(p_name), upper(v_schema)); -- get CLOB length v_fullength := dbms_lob.GetLength(v_clob); for nIndex in 1..ceil(v_fullength / 32767) loop v_offset := v_length + 1; v_length := least(v_fullength - (nIndex - 1) * 32767, 32767); dbms_lob.read(v_clob, v_length, v_offset, v_longstrings(nIndex)); -- Remove table’s owner from DDL string: v_longstrings(nIndex) := replace( v_longstrings(nIndex), c_doublequote || user || c_doublequote || '.', '' ); -- Remove the following from DDL string: -- 1) "new line" characters (chr(10)) -- 2) leading and trailing spaces v_longstrings(nIndex) := ltrim(rtrim(replace(v_longstrings(nIndex), chr(10), ''))); end loop; -- close CLOB if (dbms_lob.isOpen(v_clob) > 0) then dbms_lob.close(v_clob); end if; return v_longstrings(1); end GetDDL; so as to remove the schema prefix that usually comes with metadata. I get a null value whenever I run this function over the database link with the following queries. select getddl( 'TABLE', 'TABLE1') from user_tables@ENTORA where table_name = 'TABLE1'; select getddl( 'TABLE', 'TABLE1') from dual@ENTORA; t_string is varchar2(30) t_longstring is varchar2(32767) and type t_ArrayLongString is table of t_longstring I would really appreciate it if any one could help. Many thanks.

    Read the article

  • union marshalling issue in C#

    - by senthil
    I have union inside structure and the structure looks like struct tDeviceProperty { DWORD Tag; DWORD Size; union _DP value; }; typedef union _DP { short int i; LONG l; ULONG ul; float flt; double dbl; BOOL b; double at; FILETIME ft; LPSTR lpszA; LPWSTR lpszW; LARGE_INTEGER li; struct tBinary bin; BYTE reserved[40]; } __UDP; struct tBinary { ULONG size; BYTE * bin; }; from the tBinary structure bin has to be converted to tImage (structure is given below) struct tImage { DWORD x; DWORD y; DWORD z; DWORD Resolution; DWORD type; DWORD ID; diccid_t SourceID; const void *buffer; const char *Info; const char *UserImageID; }; to use the same in c# I have done marshaling but not giving proper values when converting the pointer to structure. The C# code is follows, tBinary tBin = new tBinary(); IntPtr tBinbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tBin)); Marshal.StructureToPtr(tBin.bin, tBinbuffer, false); tDeviceProperty tDevice = new tDeviceProperty(); tDevice.bin = tBinbuffer; IntPtr tDevicebuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tDevice)); Marshal.StructureToPtr(tDevice.bin, tDevicebuffer, false); Battary tbatt = new Battary(); tbatt.value = tDevicebuffer; IntPtr tbattbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tbatt)); Marshal.StructureToPtr(tbatt.value, tbattbuffer, false); result = GetDeviceProperty(ref tbattbuffer); Battary v = (Battary)Marshal.PtrToStructure(tbattbuffer, typeof(Battary)); tDeviceProperty v2 = (tDeviceProperty)Marshal.PtrToStructure(tDevicebuffer, typeof(tDeviceProperty)); tBinary v3 = (tBinary)Marshal.PtrToStructure(tBinbuffer, typeof(tBinary));

    Read the article

  • Soften a colour border, maybe with a gradient, programmatically.

    - by ProfK
    I have a narrow header in corporate colour, bright red, with the content below on a just-off-white background. Ive tried softening the long line where these colours meet using border type divs with intermediate backgrounds, but I think I need the original type curved gradient 'area transitions'. I could copy the 1024px wide, and too narrow (vertically), header gif from their web site, and chop it up for eight border images, but that seems clumsy, and I'm looking for something I can apply anywhere, without needing images. I am able to do round borders in the x-y plane, but I'm curious as to how I can apply a gradient to any chosen colour transition. The extra divs I'm using as border elements above and below '#top-section' arose when I was toying with making many divs for one bordered element. This seemed the ultimate in border manipulation, sans code, but very tedious to spec in CSS and lay out a new border for each bordered element. <div id="topsection"> <div style="float: right; width: 300px; padding-right: 5px;"> <div style="font-size: small; text-align: right;"> Provantage Media Management System</div> <div style="font-size: x-small; text-align: right;"> <asp:LoginStatus ID="loginStatus" runat="server" LoginText="Log in" LogoutText="Log out" /> </div> </div> <span style="padding-left: 15px;">Main Menu</span><span id="content-title"> <asp:ContentPlaceHolder ID="titlePlaceHolder" runat="server"> </asp:ContentPlaceHolder> </span> <div style="background-color: white; height: 2px;"> </div> <div style="background-color: white; height: 3px;"></div> And the CSS: #topsection { background-color: #EB2728; color: white; height: 35px; font-size: large; font-weight: bold; }
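
    A sketch of the image-free option, with the caveat that CSS gradients need a reasonably recent browser (older IE would still want the sliced-gif fallback) and that the off-white value below is a guess: a thin strip rendered directly below the header, fading from the corporate red into the content background, softens the line without any further border divs.

        /* thin transition strip placed directly below the header */
        #topsection-fade {
            height: 8px;
            background: #EB2728;    /* fallback when gradients are unsupported */
            background: linear-gradient(to bottom, #EB2728, #FDFDFA);
        }

    Markup-wise this is just one extra element after the header: <div id="topsection-fade"></div>.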

    Read the article

  • Reading and writing C++ vector to a file

    - by JB
    For some graphics work I need to read in a large amount of data as quickly as possible and would ideally like to directly read and write the data structures to disk. Basically I have a load of 3d models in various file formats which take too long to load so I want to write them out in their "prepared" format as a cache that will load much faster on subsequent runs of the program. Is it safe to do it like this? My worries are around directly reading into the data of the vector? I've removed error checking, hard coded 4 as the size of the int and so on so that i can give a short working example, I know it's bad code, my question really is if it is safe in c++ to read a whole array of structures directly into a vector like this? I believe it to be so, but c++ has so many traps and undefined behavour when you start going low level and dealing directly with raw memory like this. I realise that number formats and sizes may change across platforms and compilers but this will only even be read and written by the same compiler program to cache data that may be needed on a later run of the same program. #include <fstream> #include <vector> using namespace std; struct Vertex { float x, y, z; }; typedef vector<Vertex> VertexList; int main() { // Create a list for testing VertexList list; Vertex v1 = {1.0f, 2.0f, 3.0f}; list.push_back(v1); Vertex v2 = {2.0f, 100.0f, 3.0f}; list.push_back(v2); Vertex v3 = {3.0f, 200.0f, 3.0f}; list.push_back(v3); Vertex v4 = {4.0f, 300.0f, 3.0f}; list.push_back(v4); // Write out a list to a disk file ofstream os ("data.dat", ios::binary); int size1 = list.size(); os.write((const char*)&size1, 4); os.write((const char*)&list[0], size1 * sizeof(Vertex)); os.close(); // Read it back in VertexList list2; ifstream is("data.dat", ios::binary); int size2; is.read((char*)&size2, 4); list2.resize(size2); // Is it safe to read a whole array of structures directly into the vector? is.read((char*)&list2[0], size2 * sizeof(Vertex)); }

    Read the article

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an asp.net website that allows the user to download largish files - 30mb to about 60mb. Sometimes the download works fine but often it fails at some varying point before the download finishes with the message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event. private void DownloadFile(string fname, bool forceDownload) { string path = MapPath(fname); string name = Path.GetFileName(path); string ext = Path.GetExtension(path); string type = ""; // set known types based on file extension if (ext != null) { switch (ext.ToLower()) { case ".mp3": type = "audio/mpeg"; break; case ".htm": case ".html": type = "text/HTML"; break; case ".txt": type = "text/plain"; break; case ".doc": case ".rtf": type = "Application/msword"; break; } } if (forceDownload) { Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_")); } if (type != "") { Response.ContentType = type; } else { Response.ContentType = "application/x-msdownload"; } System.IO.Stream iStream = null; // Buffer to read 10K bytes in chunk: byte[] buffer = new Byte[10000]; // Length of the file: int length; // Total bytes to read: long dataToRead; try { // Open the file. iStream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read); // Total bytes to read: dataToRead = iStream.Length; //Response.ContentType = "application/octet-stream"; //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename); // Read the bytes. while (dataToRead > 0) { // Verify that the client is connected. if (Response.IsClientConnected) { // Read the data in buffer. length = iStream.Read(buffer, 0, 10000); // Write the data to the current output stream. Response.OutputStream.Write(buffer, 0, length); // Flush the data to the HTML output. Response.Flush(); buffer = new Byte[10000]; dataToRead = dataToRead - length; } else { //prevent infinite loop if user disconnects dataToRead = -1; } } } catch (Exception ex) { // Trap the error, if any. Response.Write("Error : " + ex.Message); } finally { if (iStream != null) { //Close the file. iStream.Close(); } Response.Close(); } }

    Read the article

  • Hibernate updating records and implementing listeners : getting only required attribute values for event.getOldState()

    - by Narendra
    Hi all, I am using Hibernate 3 as my persistence framework. Below is the sample hbm file I am using. <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="com.test.User" table="user"> <meta attribute="implements">com.test.dao.interfaces.IEntity</meta> <id name="key" type="long" column="user_key"> <generator class="increment" /> </id> <property name="userName" column="user_name" not-null="true" type="string" /> <property name="password" column="password" not-null="true" type="string" /> <property name="firstName" column="first_name" not-null="true" type="string" /> <property name="lastName" column="last_name" not-null="true" type="string" /> <property name="createdDate" column="created_date" not-null="true" type="timestamp" insert="false" update="false" /> <property name="createdBy" column="created_by" not-null="true" type="string" update="false" /> </class> </hibernate-mapping> I have added a post-update listener. If any updates are performed on User, it will be invoked and the changes will be inserted into an audit table. Below is the sample implementation for the post-update event. public void onPostUpdate(PostUpdateEvent event) { LogHelper.info(logger, "Begin - onPostUpdate " + event.getEntity().getClass().getSimpleName()); if (!this.checkForAudit(event.getEntity().getClass().getSimpleName())) { // check do we need to audit it. } // Get Attribute Names String[] attrNames = event.getPersister().getEntityMetamodel() .getPropertyNames(); Object[] oldobjectValue = event.getOldState(); Object[] newObjectValue = event.getState(); this.auditDetailsEvent(attrNames, oldobjectValue, newObjectValue); LogHelper.info(logger, "End - onPostUpdate"); // return false; } Here is my requirement: event.getPersister().getEntityMetamodel().getPropertyNames(), event.getOldState() and event.getState() must return only the attribute names or values which I want to update or insert. Is there any way to control the return values of the above ones? Please help me in this regard. Thanks, Narendra

    Read the article
