Search Results

Search found 4426 results on 178 pages for 'bunch'.


  • WCF RIA Silverlight deployment issues

    - by Handleman
    It seems the world is awash with people having problems deploying WCF RIA services, and now I'm one of them. I've already tried a bunch of things, to no avail. I need WCF RIA Services to support a Silverlight 3 application I've built. The short story: using the new WCF RIA Services (Nov 09 release?) I open VS 2008 and create a new Silverlight Application project with ".NET RIA Services" enabled. I add a LINQ to SQL .dbml file to the web project (from a SQL 2005 database prepared earlier) and compile. I then add a domain service to the web project (linking the tables I need) and compile again. Using the domain context, I "Load" data with a standard RIA get query in MainPage and add a TextBlock to display the returned data. Build and run (Cassini) - success. Publish from VS to IIS on my local PC - success. Publish from VS to the test server (IIS 6) - the Silverlight app loads when I browse to it, but Fiddler tells me I'm getting a 404 on all the WCF .svc requests. Using Fiddler to "launch IE" on the service request confirms it: 404. I have already run aspnet_regiis and ServiceModelReg and added MIME types for .xap, .xaml, .xbap and .svc. I have included the System.Web.Ria and System.Web.DomainServices DLLs with Copy Local set to true. I need help with either a) a solution or b) an approach to finding one.

    Read the article

  • ws-xmlrpc claims error on part of service but other clients work fine

    - by mludd
    I've been trying to connect to an rTorrent instance using ws-xmlrpc and it just isn't going well. The URL I'm using is the same one I used when verifying that rTorrent's XMLRPC support is fine (which it appears to be, since both a native OS X application and a small Python script I threw together can talk to it without any errors). However, when I try to connect with ws-xmlrpc I get

        org.apache.xmlrpc.XmlRpcException: Failed to create input stream: Unexpected end of file from server

    at the top of my stack trace, followed by a bunch of frames down to:

        java.net.SocketException: Unexpected end of file from server
            at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
        ...

    So basically, ws-xmlrpc is convinced that the reply from rTorrent is malformed somehow, but other libraries apparently have no problem with it. The code I use to call rTorrent is:

        private Object callRTorrent(String command, Object[] params) {
            Object result = null;
            try {
                // xmlrpcclient is an XmlRpcClient object and is instantiated in
                // the class constructor
                result = xmlrpcclient.execute(command, params);
            } catch (XmlRpcException xre) {
                System.out.println("Unable to execute method " + command);
                xre.printStackTrace();
            }
            return result;
        }

    with command set to system.listMethods and params set to an empty Object[]. From reading documentation and googling, my conclusion is that I'm not doing anything obviously wrong and this problem doesn't appear to be common, so does anyone have a clue what's going on here?
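    A stripped-down client can help isolate whether the truncated reply comes from the library configuration or from the transport itself. This is a minimal sketch against the Apache XML-RPC 3.x client API; the endpoint URL is a placeholder, and it assumes rTorrent's XML-RPC is reachable over plain HTTP (for example behind a web server), not raw SCGI.

        import java.net.URL;
        import org.apache.xmlrpc.client.XmlRpcClient;
        import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

        public class RTorrentPing {
            public static void main(String[] args) throws Exception {
                // Point this at the same URL the working OS X / Python clients use.
                XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
                config.setServerURL(new URL("http://localhost/RPC2"));   // placeholder endpoint

                XmlRpcClient client = new XmlRpcClient();
                client.setConfig(config);

                // system.listMethods takes no parameters and returns an array of method names.
                Object[] methods = (Object[]) client.execute("system.listMethods", new Object[0]);
                for (Object m : methods) {
                    System.out.println(m);
                }
            }
        }

    If this minimal client fails the same way while the Python script succeeds against the identical URL, the problem is in the transport layer rather than in the calling code.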

    Read the article

  • Creating an MJPEG Viewer Iphone

    - by Tony
    Hey all, I'm trying to make an MJPEG viewer in Objective-C but I'm having a bunch of issues with it. First off, I'm using AsyncSocket (http://code.google.com/p/cocoaasyncsocket/), which lets me connect to the host. Here's what I've got so far:

        NSLog(@"Ready");
        asyncSocket = [[AsyncSocket alloc] initWithDelegate:self];
        //http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi
        NSError *err = nil;
        if (![asyncSocket connectToHost:@"kamera5.vfp.slu.se" onPort:80 error:&err]) {
            NSLog(@"Error: %@", err);
        }

    Then in the didConnectToHost method:

        - (void)onSocket:(AsyncSocket *)sock didConnectToHost:(NSString *)host port:(UInt16)port {
            NSLog(@"Accepted client %@:%hu", host, port);
            NSString *urlString = [NSString stringWithFormat:@"http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi"];
            NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"GET"];

            //set headers
            NSString *_host = [NSString stringWithFormat:host];
            [request addValue:_host forHTTPHeaderField:@"Host"];
            NSString *KeepAlive = [NSString stringWithFormat:@"300"];
            [request addValue:KeepAlive forHTTPHeaderField:@"Keep-Alive"];
            NSString *connection = [NSString stringWithFormat:@"keep-alive"];
            [request addValue:connection forHTTPHeaderField:@"Connection"];

            //get response
            NSHTTPURLResponse *urlResponse = nil;
            NSError *error = [[NSError alloc] init];
            NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&urlResponse error:&error];
            NSString *result = [[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding];
            NSLog(@"Response Code: %d", [urlResponse statusCode]);
            if ([urlResponse statusCode] >= 200 && [urlResponse statusCode] < 300) {
                NSLog(@"Response: %@", result); //here you get the response
            }
        }

    This fetches the MJPEG stream once, but it never asks for more data. What I think it's doing is loading the first chunk of data and then disconnecting. Am I doing this totally wrong, or is there light at the end of this tunnel? Thanks!
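    sendSynchronousRequest: only returns when the connection closes, which an MJPEG stream never really does, so the response has to be consumed incrementally. Below is a minimal sketch of the delegate-based alternative; frameBuffer is an assumed NSMutableData property, and the JPEG-boundary scanning is only indicated in a comment.

        // In place of the synchronous request, start a streaming connection
        // (this would replace the "get response" block above):
        NSURLRequest *request = [NSURLRequest requestWithURL:
            [NSURL URLWithString:@"http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi"]];
        self.frameBuffer = [NSMutableData data];   // assumed NSMutableData property
        [[NSURLConnection alloc] initWithRequest:request delegate:self];

        // Called repeatedly as chunks of the multipart stream arrive:
        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [self.frameBuffer appendData:data];
            // Scan frameBuffer for JPEG start/end markers (0xFFD8 ... 0xFFD9),
            // build a UIImage from each complete frame, then discard the consumed bytes.
        }

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            NSLog(@"Stream failed: %@", error);
        }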

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    I'm using SoapClient's class-mapping feature and it's pretty sweet. Unfortunately, the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if those properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter access, since only SoapClient would be touching the properties directly. E.g. developers would access properties like this:

        $obj->getAccountNumber()

    while SoapClient would access properties like this:

        $obj->accountNumber

    I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public flag the mapped objects can check to see whether a property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper {
            public function __soapCall($method, $args) {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice {
            function __get($val) {
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.
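    For what it's worth, a variant of option 3 can stay fairly lightweight: keep the classmap for responses, but flatten objects yourself before sending them back. A minimal sketch, where Invoice, accountNumber, getInvoice, updateInvoice and service.wsdl are all placeholders rather than anything from the real service:

        <?php
        class Invoice {
            public $accountNumber;   // read-only on the service side
            public $amount;

            // Returns an array safe to pass back to the service (server-managed fields removed).
            public function toSoapArray() {
                $data = get_object_vars($this);
                unset($data['accountNumber']);
                return $data;
            }
        }

        $client  = new SoapClient('service.wsdl', array('classmap' => array('Invoice' => 'Invoice')));
        $invoice = $client->getInvoice(42);      // hydrated into Invoice via the classmap
        $invoice->amount = 100.0;
        $client->updateInvoice($invoice->toSoapArray());   // read-only fields never leave the client

    The trade-off is that each mapped class has to keep its list of server-managed fields in sync with the WSDL, but nothing is hidden behind magic accessors.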

    Read the article

  • Universal Authentication to Google Data API?

    - by viatropos
    Hey, I want to be able to have, say, 10 admin users store all their documents on Google Docs for a domain (http://docs.google.com/a/domain.com), and have everyone else be able to view them through domain.com/documents. I'm just not certain how the whole authentication thing works in that case. Should I use OAuth? Or could I just use ClientLogin for, say, the root/global admin, and any time someone goes to the site, they log in as that? That works for personal docs, but it doesn't seem to be working for Google Apps. I would like the user to have no idea they're accessing Google Docs, so I don't want them to have to say "Yes, authenticate this app with Google", as seen in the Doclist Manager app. The app is basically: the admin stores a bunch of forms and documents, and the user fills in forms and views the documents the admin has posted. So there's no need to access the user's own Google Docs, but it seems like AuthSub and OAuth are addressing exactly that instead... Thanks for the tips.

    Read the article

  • Flash vs. Ajax Abilities

    - by Alex
    Hey everyone. I want to develop an application that does a bunch of cool stuff. The first thing I need is to get information about the page a person is browsing: for example, how long a user stayed on a page and where the scrollbar was. While gathering that data, it's all saved to a database. The thing is, I prefer doing this in Flash [although I have no experience in it] over Ajax, since I want to hide the code - which, as far as I know, is not possible with JavaScript/Ajax. So, can I do all that in Flash - read the content of the page and get the position of the scrollbar? Plus, I then need to go through the gathered information saved in the database. Since there could be many calculations, I thought C++/.NET would be better than PHP [which I know better]. Is all that possible, or am I just crazy? :) Thanks ahead.
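    For the data-collection half, plain JavaScript can already do this without Flash; obfuscation aside, nothing sent from the browser is truly hidden from a determined user. A rough sketch, assuming a /track endpoint on your own server (the endpoint and field names are made up):

        var pageLoadedAt = new Date().getTime();
        var lastScrollTop = 0;

        window.onscroll = function () {
            lastScrollTop = window.pageYOffset || document.documentElement.scrollTop || 0;
        };

        window.onbeforeunload = function () {
            var xhr = new XMLHttpRequest();
            xhr.open("POST", "/track", false);   // synchronous so it finishes before the page unloads
            xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xhr.send("timeOnPage=" + (new Date().getTime() - pageLoadedAt) +
                     "&scrollTop=" + lastScrollTop);
        };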

    Read the article

  • Binding a member signal to a function

    - by the_drow
    This line of code compiles without a problem:

        boost::bind(boost::ref(connected_),
                    boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                    boost::asio::placeholders::error);

    However, when assigning it to a boost::function or passing it as a callback like this:

        socket_->async_connect(connection_->remote_endpoint(),
            boost::bind(boost::ref(connected_),
                        boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                        boost::asio::placeholders::error));

    I get a whole bunch of incomprehensible errors (linked since they're too long to fit here). On the other hand, I have succeeded in binding a free signal to a boost::function like this:

        void print(const boost::system::error_code& error) {
            cout << "session connected";
        }

        int main() {
            boost::signal<void(const boost::system::error_code &)> connected_;
            connected_.connect(boost::bind(&print, boost::asio::placeholders::error));

            client<>::connection_t::socket_ptr socket_(new client<>::connection_t::socket_t(conn->service())); // shared_ptr of a tcp socket
            socket_->async_connect(conn->remote_endpoint(),
                boost::bind(boost::ref(connected_), boost::asio::placeholders::error));

            conn->service().run(); // io_service.run()
            return 0;
        }

    This works and prints "session connected" correctly. What am I doing wrong here?
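    One way to sidestep the wall of template errors is to hand async_connect a plain member-function handler and fire the signal from inside it; boost::signal objects are noncopyable, and pushing one through boost::bind into a boost::function is often what sets off errors like these. A minimal sketch, where handle_connect is an assumed member of session<version>:

        // inside session<version>:
        void handle_connect(const boost::system::error_code& error) {
            connected_(error);   // forward the result to whatever is connected to the signal
        }

        // at the call site:
        socket_->async_connect(connection_->remote_endpoint(),
            boost::bind(&session<version>::handle_connect,
                        boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                        boost::asio::placeholders::error));

    Binding the shared_ptr as the first argument also keeps the session alive until the handler runs, which is the usual asio idiom.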

    Read the article

  • How to use class_eval <<-"end_eval" in Ruby? Not parsing correctly

    - by viatropos
    I would like to define dynamic methods based on options people give when instantiating the module. So in their AR model, they'd do something like this:

        acts_as_something :class_name => "CustomClass"

    I'm trying to implement that like so:

        module MyModule
          def self.included(base)
            as          = Config.class_name.underscore
            foreign_key = "#{as}_id"

            # 1 - class_eval, throws these errors:
            # ~/test-project/helpers/form.rb:45: syntax error, unexpected $undefined
            #   @ ||= MyForm.new(
            #   ^
            # ~/test-project/helpers/form.rb:46: syntax error, unexpected ','
            # ~/test-project/helpers/form.rb:48: syntax error, unexpected ')',
            #   expecting kEND from ~/test-project/helpers.rb:12:in `include'
            base.class_eval <<-"end_eval", __FILE__, __LINE__
              attr_accessor :#{as}

              def #{as}
                @#{as} ||= MyForm.new(
                  :id    => self.#{foreign_key},
                  :title => self.title
                )
                @#{as}
              end
            end_eval
          end
        end

    But it's throwing a bunch of errors, which I've printed in the comments. Am I using this incorrectly? What are some better ways to define dynamic method names, and dynamic names inside the method, like this? I see people use this approach often instead of define_method (see these classes in resource_controller and couchrest, toward the bottom). What am I missing here? Thanks for the help.
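    For comparison, a define_method-based sketch avoids building Ruby source in a string at all; it also makes the error above easier to interpret, since the "@ ||=" in the message suggests the interpolated #{as} came out empty (i.e. Config.class_name was blank) rather than the heredoc itself being malformed. This assumes as and foreign_key hold valid method names:

        module MyModule
          def self.included(base)
            as          = Config.class_name.underscore
            foreign_key = "#{as}_id"

            base.class_eval do
              attr_accessor as.to_sym

              define_method(as) do
                value = instance_variable_get("@#{as}")
                value ||= MyForm.new(:id => send(foreign_key), :title => title)
                instance_variable_set("@#{as}", value)
              end
            end
          end
        end

    Because the class_eval block is a closure, as and foreign_key stay visible inside define_method without any string interpolation.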

    Read the article

  • UX question: is it better to have "serious delete" or to have "trash"?

    - by ftrotter
    I am developing an application that allows a user to manage some individual data points. One of the things my users will want to do is "delete", but what should that mean? For a web application, is it better to present the user with a serious delete or to use a "trash" system?

    Under "serious delete" (I would love to know if there is a better name for this...) you click "delete" and the user is warned: "this is a final and tragic action. Once you do this you will not be able to get -insert data point name here- back, even if you are crying..." Then if they click delete... well, it truly is gone forever.

    Under the "trash" model, you never trust that the user really wants to delete; instead you remove the data point from the "main display" and put it into a bucket called "the trash". This gets it out of the user's way, which is what they usually want, but they can get it back if they make a mistake. Obviously this is the way most operating systems have gone.

    The advantages of "serious delete" are:

    - Easy to implement
    - Easy to explain to users

    The disadvantages of "serious delete" are:

    - It can be tragically final
    - Sometimes, cats walk on keyboards

    The advantages of the "trash" system are:

    - The user is safe from themselves
    - Bulk methods like "delete a bunch at once" make more sense
    - It saves support headaches

    The disadvantages of the "trash" system are:

    - For sensitive data, you create an illusion of destruction: users think something is gone, but it is not
    - Lots of subtle distinctions make implementation more difficult
    - Do you "eventually" delete the contents of the trash?

    My question is: which one is the right design pattern for modern web applications? With enough discussion to justify your answer... I would love to be pointed towards some relevant research. -FT

    Read the article

  • Call private methods and private properties from outside a class in PHP

    - by Pablo López Torres
    I want to access private methods and variables from outside their classes in some very rare, specific cases. I've seen claims that this is not possible, even when introspection is used. The specific case is the following. I would like to have something like this:

        class Console {
            final public static function run() {
                while (TRUE != FALSE) {
                    echo "\n> ";
                    $command = trim(fgets(STDIN));
                    switch ($command) {
                        case 'exit':
                        case 'q':
                        case 'quit':
                            echo "OK+\n";
                            return;
                        default:
                            ob_start();
                            eval($command);
                            $out = ob_get_contents();
                            ob_end_clean();
                            print("Command: $command");
                            print("Output:\n$out");
                            break;
                    }
                }
            }
        }

    This method should be able to be injected into code like this:

        class Demo {
            private $a;

            final public function myMethod() {
                // some code
                Console::run();
                // some other code
            }

            final public function myPublicMethod() {
                return "I can run through eval()";
            }

            private function myPrivateMethod() {
                return "I cannot run through eval()";
            }
        }

    (This is just a simplification; the real one goes through a socket and implements a bunch more things...) So... if you instantiate the class Demo and call $demo->myMethod(), you get a console. That console can access the first method with a command like:

        > $this->myPublicMethod();

    But you cannot successfully run the second one:

        > $this->myPrivateMethod();

    Do any of you have any idea how to do this, or is there any library for PHP that allows it? Thanks a lot!
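    If the goal is simply to reach private members from outside the class in these rare cases, the Reflection API can do it directly (ReflectionMethod::setAccessible requires PHP 5.3.2+). A minimal sketch against the Demo class above:

        <?php
        $demo = new Demo();

        // Invoke the private method from outside the class.
        $method = new ReflectionMethod('Demo', 'myPrivateMethod');
        $method->setAccessible(true);
        echo $method->invoke($demo);        // "I cannot run through eval()"

        // Read the private property as well.
        $property = new ReflectionProperty('Demo', 'a');
        $property->setAccessible(true);
        var_dump($property->getValue($demo));

    Inside the eval()'d console commands the same calls work, since Reflection does not care where it is invoked from.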

    Read the article

  • Preg Expression to identify classes/ids in a CSS file that have no contents

    - by dclowd9901
    I'm in the process of updating some old CSS files in our systems, and we have a bunch that contain lots of empty classes simply taking up space in the file. I'd love to learn how to write regular expressions, but I just don't get them. I'm hoping the more I expose myself to them (with a little more cohesive explanation), the more I'll end up understanding them.

    The problem: I'm looking for an expression that will identify text followed by a '{' (some have spaces in between, some do not), and if there are no letters or numbers between that bracket and the closing '}' (spaces don't count), it will be identified as a matching string. I suppose I could trim the whitespace out of the document before running the regular expression over it, but I don't want to change the basic structure of the text, since I'm hoping to return it into a large <textarea>. Bonus points for explaining the characters and their meanings, and also for an expression that identifies lines without any text or numbers. I will likely use the final expression in a PHP script.

    tl;dr: a regular expression to match:

        .a_class_or #an_id {
            /* if there aren't any alphanumerics in here,
               this should be a matching line of text */
        }
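    As a starting point, something like this matches a selector whose block contains nothing but whitespace and comments; the pattern and the file name are my own, so treat it as a sketch to adapt rather than a definitive answer:

        <?php
        $css = file_get_contents('styles.css');   // placeholder file name

        // ([^{}]+)           selector: one or more characters that are not braces (captured)
        // \{                 a literal opening brace
        // \s*                any whitespace, including newlines
        // (?:/\*.*?\*/\s*)*  zero or more /* ... */ comments, each followed by whitespace
        // \}                 a literal closing brace
        // The trailing "s" modifier lets "." match newlines inside comments.
        preg_match_all('#([^{}]+)\{\s*(?:/\*.*?\*/\s*)*\}#s', $css, $matches);

        print_r($matches[1]);   // the selectors of the empty rules

    Dropping the comment part, the same skeleton with \s* alone between the braces answers the "lines without any text or numbers" bonus question for truly empty blocks.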

    Read the article

  • emacs: Can I set compilation-error-regexp-alist in a mode hook fn?

    - by Cheeso
    I am trying to set compilation-error-regexp-alist in a function that I add as a mode hook:

        (defun cheeso-javascript-mode-fn ()
          (turn-on-font-lock)
          ;; ...bunch of other stuff...

          ;; for JSLINT
          (make-local-variable 'compilation-error-regexp-alist)
          (setq compilation-error-regexp-alist
                '(("^[ \t]*\\([A-Za-z.0-9_: \\-]+\\)(\\([0-9]+\\)[,]\\( *[0-9]+\\))\\( Microsoft JScript runtime error\\| JSLINT\\): \\(.+\\)$"
                   1 2 3)))

          ;;(make-local-variable 'compile-command)
          (setq compile-command
                (let ((file (file-name-nondirectory buffer-file-name)))
                  (concat "%windir%\\system32\\cscript.exe \\cheeso\\bin\\jslint.js " file))))

        (add-hook 'javascript-mode-hook 'cheeso-javascript-mode-fn)

    The mode hook runs, and the various things I set in it work. The compile-command gets set. But for some reason, the compilation-error-regexp-alist value doesn't take effect. If I later do M-x describe-variable on compilation-error-regexp-alist, it shows me the value I think it should have. But the errors in the compilation buffer don't get highlighted, and M-x next-error does not work. If I instead add the error regexp to compilation-error-regexp-alist via setq-default, like this:

        (setq-default compilation-error-regexp-alist
                      '( ;; ... jslint regexp here ...
                         ;; ... many other regexps here ...
                        ))

    ...then it works: the errors in the compilation buffer get properly highlighted and M-x next-error functions as expected.
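    One thing worth checking: compilation-error-regexp-alist is consulted in the *compilation* buffer, not in the buffer that started the compile, so a value made buffer-local in a javascript-mode hook never gets seen there. A sketch that registers the pattern globally instead (the jslint symbol name is arbitrary):

        (require 'compile)

        ;; Register the JSLint/JScript pattern under its own symbol...
        (add-to-list 'compilation-error-regexp-alist-alist
                     '(jslint
                       "^[ \t]*\\([A-Za-z.0-9_: \\-]+\\)(\\([0-9]+\\)[,]\\( *[0-9]+\\))\\( Microsoft JScript runtime error\\| JSLINT\\): \\(.+\\)$"
                       1 2 3))

        ;; ...and enable it for all compilation buffers.
        (add-to-list 'compilation-error-regexp-alist 'jslint)

    This is effectively what the working setq-default version does, just without overwriting the built-in patterns.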

    Read the article

  • Question about 'git branching'

    - by michael
    Hi, I read this about git branches: http://book.git-scm.com/3_basic_branching_and_merging.html. I followed it and created one branch, experimental. Then I:

    1. switch to the experimental branch (git checkout experimental)
    2. make a bunch of changes
    3. commit them (git commit -a)
    4. switch to the master branch (git checkout master)
    5. make some changes and commit there
    6. switch back to experimental (git checkout experimental)
    7. merge master into experimental (git merge master)
    8. hit some conflicts, resolve them, and run 'git add myfile'

    And now I am stuck; I can't move back to master. When I do:

        $ git checkout master
        error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.

    and I did:

        $ git rebase --abort
        No rebase in progress?

    and I did:

        $ git add res/layout/socialhub_list_item.xml
        $ git checkout master
        error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.

    What can I do so that I can go back to my master branch? Thank you.
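    A sketch of the usual way out: the "would be overwritten by merge" message here generally means the merge on experimental was never concluded, so finish it (or throw it away) before switching branches:

        # mark the conflicted file as resolved (step 8 above) and conclude the merge
        git add res/layout/my_item.xml
        git commit                 # records the merge commit on experimental

        # switching branches should work again
        git checkout master

        # alternatively, to abandon the half-finished merge instead of committing it:
        # git reset --hard HEAD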

    Read the article

  • Extra fulltext ordering criteria beyond default relevance

    - by Jeremy Warne
    I'm implementing an ingredient text search, for adding ingredients to a recipe. I currently have a full-text index on the ingredient name, which is stored in a single text field, like so: "Sauce, tomato, lite, Heinz". I've found that because there are a lot of ingredients with very similar names in the database, simply sorting by relevance doesn't work that well a lot of the time. So I've ended up sorting by a bunch of my own rules of thumb, which probably duplicates a lot of what the full-text search algorithm does when it spits out a numerical relevance. For instance (abridged):

        ORDER BY [ingredient name is exactly the search term],
                 [ingredient name starts with the search term],
                 [ingredient name starts with any word from the search and contains all search terms in some order],
                 [ingredient name contains all search terms in some order],
                 ...and so on.

    Each of these is defined in the SELECT list as an expression returning either 1 or 0, and I order by them in sequence. I would love to hear suggestions for:

    1. A better way to define complicated order-by criteria in one place, say in a view or stored procedure that you can pass just the search term to and get back a set of results without having to worry about how they're ordered?
    2. A better tool for this than MySQL's fulltext engine - perhaps if I were using Sphinx or something [which I've heard of but not used before], would I find some sort of config option designed to solve problems like this?
    3. Some google search terms which might turn up discussion on how to order text items within a specific domain like this? I haven't found much that's of use.

    Thanks for reading!
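    For reference, the hand-rolled ranking can at least live in one place as a single query (or be wrapped in a view or stored procedure that takes the search term); the table and column names below are assumptions:

        -- assumes an `ingredients` table with a FULLTEXT index on `name`
        SELECT name,
               MATCH(name) AGAINST('sauce tomato') AS relevance
        FROM   ingredients
        WHERE  MATCH(name) AGAINST('sauce tomato')
        ORDER BY (name = 'sauce tomato')     DESC,  -- exact match first
                 (name LIKE 'sauce tomato%') DESC,  -- then prefix matches
                 relevance                   DESC;  -- then full-text relevance

    Each comparison in the ORDER BY evaluates to 1 or 0, so DESC puts the matches that satisfy it ahead of the ones that don't, exactly like the existing rules of thumb.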

    Read the article

  • Creating a nested model and setting a property on the nested model before save

    - by CWitty
    I have two models, a Company and a User; the Company has_many :users and the User belongs_to :company. I have a form such as:

        <%= form_for @company, data: {toggle: :validator}, novalidate: "novalidate", html: {role: :form} do |f| %>
          <!-- company fields -->

    and then in there I have:

        <%= f.fields_for :users, @company.users.build do |user_form| %>
          <!-- a bunch of user fields -->

    It posts the data with the nested attributes, users_attributes: {"0" => {name: "Chad"}}, but it only creates the Company object, not the User.

    Company model:

        class Company < ActiveRecord::Base
          has_many :users, dependent: :destroy
          has_many :contacts, dependent: :destroy
          accepts_nested_attributes_for :users
          accepts_nested_attributes_for :contacts

          attr_accessor :card_token, :users_attributes

          before_create :create_company_customer_token
          before_create :create_admin_user
          before_destroy :set_deleted_flag

          validates_presence_of :name, :phone_number

          private

          def create_admin_user
            self.users.first.admin = true
          end

          def set_deleted_flag
            self.deleted = true
            save
            users.each do |u|
              u.destroy
            end
            false
          end

          def create_company_customer_token
            begin
              customer = Stripe::Customer.create(description: "Company: #{self.name}", card: self.card_token, plan: self.plan)
              self.stripe_customer_id = customer['id']
            rescue Stripe::StripeError => e
              self.errors.add(:stripe_customer_id, "Looks like we are having an issue at the moment, please try again shortly")
              @logger ||= Rails.logger
              @logger.error(e)
            end
          end
        end

    User model:

        class User < ActiveRecord::Base
          include Clearance::User

          has_many :messages
          belongs_to :company

          before_destroy :set_deleted_flag
          after_create :send_welcome_email

          validates_presence_of :first_name, :last_name
          validates_uniqueness_of :email, scope: :company_id, conditions: -> { where.not(deleted: true) }

          def name
            "#{first_name} #{last_name}"
          end

          private

          def set_deleted_flag
            self.deleted = true
            save
          end

          def send_welcome_email
            UserMailer.welcome_email(self).deliver
          end
        end
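    For comparison, here is a pared-down version of the same setup that does create the child row. It leans entirely on the users_attributes= writer that accepts_nested_attributes_for generates, so it is worth checking that nothing else (an attr_accessor, for instance) redefines that writer; the attribute values are made up:

        class Company < ActiveRecord::Base
          has_many :users, dependent: :destroy
          accepts_nested_attributes_for :users
        end

        class User < ActiveRecord::Base
          belongs_to :company
        end

        # the "0" => {...} hash format posted by the form works the same way here:
        company = Company.create!(
          name: "Acme", phone_number: "555-0100",
          users_attributes: { "0" => { first_name: "Chad", last_name: "Smith" } }
        )
        company.users.count  # => 1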

    Read the article

  • cache_money only writing to memcached on creates and updates, and seemingly never looking in the cac

    - by Shane Liebling
    I seem to be having some extremely odd cache_money interactions. When I am on the console and I create a new instance of a class and save it, I see the cache misses and cache stores in my memcached console output. Then, when the create finishes, I see a bunch of cache deletions. If I then try to do any kind of find for the newly created object (or any other object, for that matter), I never see any cache access. This is highly confusing. I could kind of understand if all finds never hit the cache (though that in and of itself would be an issue requiring investigation), but finds do seem to hit the cache while the object is being created (checking for associations and such). Has anyone had this experience in the past? Any thoughts? AFAIK there isn't really much in the way of configuration options for cache_money, and it certainly doesn't seem like there are any that would be on by default and create these kinds of symptoms. My cache_money config is basically straight out of the docs. Any help would be greatly appreciated.

    Read the article

  • Source of parsers for programming languages?

    - by Arkaaito
    I'm dusting off an old project of mine which calculates a number of simple metrics about large software projects. One of the metrics is the length of files/classes/methods. Currently my code "guesses" where class/method boundaries are based on a very crude algorithm (traverse the file, maintaining a "current depth" and adjusting it whenever you encounter unquoted brackets; when you return to the level a class or method began on, consider it exited). However, there are many problems with this procedure, and a "simple" way of detecting when your depth has changed is not always effective. To make this give accurate results, I need to use the canonical way (in each language) of detecting function definitions, class definitions and depth changes. This amounts to writing a simple parser to generate parse trees containing at least these elements for every language I want my project to be applicable to. Obviously parsers have been written for all these languages before, so it seems like I shouldn't have to duplicate that effort (even though writing parsers is fun). Is there some open-source project which collects ready-to-use parser libraries for a bunch of source languages? Or should I just be using ANTLR to make my own from scratch? (Note: I'd be delighted to port the project to another language to make use of a great existing resource, so if you know of one, it doesn't matter what language it's written in.)

    Read the article

  • GHC 6.12 and MacPorts

    - by absz
    I recently installed GHC 6.12 and the Haskell Platform 2010.1.0.1 on my Intel MacBook running OS X 10.5.8, and initially everything worked fine. However, I discovered that if I use cabal install to install a package which depends on a MacPorts library, e.g.

        cabal install --extra-lib-dirs=/opt/local/lib --extra-include-dirs=/opt/local/include gd

    things work fine in GHCi, but if I try to compile, I get the error

        Linking test ...
        Undefined symbols:
          "_iconv_close", referenced from:
              _hs_iconv_close in libHSbase-4.2.0.0.a(iconv.o)
          "_iconv", referenced from:
              _hs_iconv in libHSbase-4.2.0.0.a(iconv.o)
          "_iconv_open", referenced from:
              _hs_iconv_open in libHSbase-4.2.0.0.a(iconv.o)
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    After some googling, I found a long Haskell-cafe thread discussing this problem. The upshot seems to be that MacPorts installs an updated version of libiconv whose binary interface is slightly different from the version included with the system. Consequently, if you try to link against any MacPorts library, the MacPorts libiconv gets linked in too; and since the base library was built against a different version of libiconv, things break. I've tried setting LD_LIBRARY_PATH and DYLD_LIBRARY_PATH and adding more flags to try to get it to look at /usr/lib again, e.g.

        cabal install --extra-lib-dirs=/opt/local/lib --extra-include-dirs=/opt/local/include \
                      --extra-lib-dirs=/usr/lib --extra-include-dirs=/usr/include gd

    but neither worked. Uninstalling the MacPorts libiconv isn't really an option, since I have a bunch of ports installed which depend on it - including some ports I want Haskell to link against, like gd2. From what I've seen online, the upshot really seems to be "you're boned": you cannot link against any MacPorts library while compiling with GHC, and there doesn't seem to be a solution. However, that thread was from the end of 2009, so I figure there's a chance someone has a solution, workaround, ridiculous hack... anything, really. So: does anybody know how to get GHC 6.12 to link against the system libiconv at the same time as it links to libraries from MacPorts? Or, failing that, a way to make linking not break in some other clever way?
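    One more thing that is sometimes worth trying before giving up: pushing /usr/lib ahead of the MacPorts tree at link time via GHC's -optl linker pass-through, so the system libiconv gets picked up first. Purely a sketch; whether it is enough depends on how the other MacPorts libraries were built:

        cabal install gd \
          --extra-include-dirs=/opt/local/include \
          --extra-lib-dirs=/opt/local/lib \
          --ghc-options="-optl-L/usr/lib"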

    Read the article

  • Resize transparent images using C#

    - by MartinHN
    Does anyone have the secret formula for resizing transparent images (mainly GIFs) without ANY quality loss whatsoever? I've tried a bunch of things; the closest I get is not good enough. Take a look at my main image: http://www.thewallcompany.dk/test/main.gif - and then the scaled image: http://www.thewallcompany.dk/test/ScaledImage.gif

        //Internal resize for indexed colored images
        void IndexedRezise(int xSize, int ySize)
        {
            BitmapData sourceData;
            BitmapData targetData;

            AdjustSizes(ref xSize, ref ySize);

            scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat);
            scaledBitmap.Palette = bitmap.Palette;
            sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly, bitmap.PixelFormat);
            try
            {
                targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize),
                    ImageLockMode.WriteOnly, scaledBitmap.PixelFormat);
                try
                {
                    xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width;
                    yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height;
                    sourceStride = sourceData.Stride;
                    sourceScan0 = sourceData.Scan0;
                    int targetStride = targetData.Stride;
                    System.IntPtr targetScan0 = targetData.Scan0;
                    unsafe
                    {
                        byte* p = (byte*)(void*)targetScan0;
                        int nOffset = targetStride - scaledBitmap.Width;
                        int nWidth = scaledBitmap.Width;
                        for (int y = 0; y < scaledBitmap.Height; ++y)
                        {
                            for (int x = 0; x < nWidth; ++x)
                            {
                                p[0] = GetSourceByteAt(x, y);
                                ++p;
                            }
                            p += nOffset;
                        }
                    }
                }
                finally
                {
                    scaledBitmap.UnlockBits(targetData);
                }
            }
            finally
            {
                bitmap.UnlockBits(sourceData);
            }
        }

    I'm using the above code to do the indexed resizing. Does anyone have ideas for improvement?

    Read the article

  • Need help with Javascript....I think

    - by Mikey
    I'm trying to bypass going through a bunch of menus to get to the data I want directly. Here are the links I want to go to:

        http://factfinder.census.gov/servlet/MapItDrawServlet?geo_id=14000US53053072904&tree_id=4001&context=dt&_lang=en&_ts=288363511701
        factfinder.census.gov/servlet/MapItDrawServlet?geo_id=14000US53025981400&tree_id=4001&context=dt&_lang=en&_ts=288363511701
        factfinder.census.gov/servlet/MapItDrawServlet?geo_id=14000US53067011620&tree_id=4001&context=dt&_lang=en&_ts=288363511701

    Notice that if you pull one of those up right now, you simply see a GIF outline of the map; however, there is no map data "behind" it. But if you go to factfinder.census.gov/servlet/DTGeoSearchByListServlet?ds_name=DEC_2000_SF1_U&_lang=en&_ts=288392632118 and:

    1. Select Geographic Type: ..... ..... Census Tract
    2. Select a State: Washington
    3. Select a County: Pierce
    4. Select one or more geographic areas: Census Tract 729.04
    5. Hit "Map It"

    the map will load perfectly. Also, until you close your browser, any of the other links will work perfectly. What I want to do is bypass these 5 steps, but obviously something is preventing this. Is there a feasible workaround? I have my own domain where I can upload new JavaScript or HTML files or whatever is needed.

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the view model is starting to get bloated. There are two main issues.

    One is that, to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view-specific information - for example, a boolean tracking the status of a long-running process in the VM, so the view can disable some of its controls while that process is working. I've read that this issue could be solved with attached behaviors, and I'll look more into that. In the example MVVM apps you see online, this isn't a big deal because they are over-simplified.

    The other issue is the number of commands in my VM. Right now there are four. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM), so all the business logic lives in the VM. I've considered moving each command into a separate unit of work, but I'm not sure of the best way to do that.

    Which patterns are you using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated, you should break them into many views/VMs". It is certainly not too complicated from a UX perspective - there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.
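    On the first issue, an attached behavior can keep the VM down to a single busy flag and move the control-disabling wiring into the view. A minimal sketch; the class and property names are mine, not from PRISM or any other framework:

        using System.Windows;

        public static class BusyBehavior
        {
            // Bind this to the VM's busy flag in XAML:
            //   local:BusyBehavior.DisableWhenBusy="{Binding IsWorking}"
            public static readonly DependencyProperty DisableWhenBusyProperty =
                DependencyProperty.RegisterAttached(
                    "DisableWhenBusy", typeof(bool), typeof(BusyBehavior),
                    new PropertyMetadata(false, OnDisableWhenBusyChanged));

            public static void SetDisableWhenBusy(DependencyObject element, bool value)
            {
                element.SetValue(DisableWhenBusyProperty, value);
            }

            public static bool GetDisableWhenBusy(DependencyObject element)
            {
                return (bool)element.GetValue(DisableWhenBusyProperty);
            }

            private static void OnDisableWhenBusyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var uiElement = d as UIElement;
                if (uiElement != null)
                {
                    uiElement.IsEnabled = !(bool)e.NewValue;
                }
            }
        }

    The VM keeps one property with domain meaning ("a long-running job is in flight"), and the knowledge of which controls react to it stays in the view where it belongs.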

    Read the article

  • MKAnnotationView is not added for some annotations

    - by Dave
    I find that the MKMapView has to be re-centered a significant distance away from the existing map center before I'm able to add more annotations to the map and have them DISPLAYED. Is this a bug in SDK 3.2? I have looked literally EVERYWHERE and can't find a way to refresh the map view without zooming in, zooming out, or moving the map. Why?? I don't mind if the solution involves a bunch of custom delegates and notifications; I just NEED this so badly. Basically, I am adding pins (annotations) to a map view when performing a search for businesses, and I remove the existing annotations on every search, so the annotations are added to the map on every search. If I search twice with the same map view, I see the results loading but the map view stays empty. The pins will appear if I move/resize/zoom the map. Has anybody seen this? Any help is appreciated.
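    One workaround sometimes suggested for this symptom is to nudge the map's region programmatically right after adding the annotations, forcing the same layout pass a manual pan or zoom would. Treat this as a hedged sketch, not a confirmed fix; searchResults is a placeholder array of id<MKAnnotation> objects:

        [mapView removeAnnotations:mapView.annotations];
        [mapView addAnnotations:searchResults];

        // Re-apply the current region to trigger the same refresh a manual pan/zoom causes.
        [mapView setRegion:mapView.region animated:NO];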

    Read the article

  • JSP or .ascx equivalent for Scala?

    - by Daniel Worthington
    I'm working on a small MVC "framework" (it's really very small) in Scala. I'd like to be able to write my view files as Scala code so I can get lots of help from the compiler. Pre-compiling is great, but what I really want is a way to have the servlet container automatically compile certain files (my view files) on request so I don't have to shut down Jetty and compile all my source files at once, then start it up again just to see small changes to my HTML. I do this a lot with .ascx files in .NET (the file will contain just one scriptlet tag with a bunch of C# code inside which writes out markup using an XmlWriter) and I love this workflow. You just make changes and then refresh your browser, but it's still getting compiled! I don't have a lot of experience with Java, but it seems possible to do this with JSP as well. I'm wondering if this sort of thing is possible in Scala. I have looked into building this myself (see more info here: http://www.nabble.com/Compiler-API-td12050645.html) but I would rather use something else if it's out there.

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly-minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that makes a difference in testing. What should I be looking at? What tools can I use to investigate why this is happening?
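    A quick way to turn anecdotal slowness into comparable numbers is to capture I/O and timing statistics for the same call on both servers, ideally alongside the actual execution plans; the procedure name below is a placeholder:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        DBCC FREEPROCCACHE;        -- optional: start both servers from a cold plan cache
        EXEC dbo.MyTestProcedure;  -- placeholder for the stored procedure used as the test case

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    Comparing logical versus physical reads and CPU versus elapsed time between the two servers usually points at whether the difference is in the plan, the buffer cache, or the I/O path.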

    Read the article

  • How do you determine subtype of an entity using Inheritance with Entity Framework 4?

    - by KallDrexx
    I am just starting to use Entity Framework 4 for the first time. So far I am liking it, but I am a bit confused about how to do inheritance correctly. I am using a model-first approach, and I have a Person entity with two subtype entities, Employee and Client. EF is correctly using the table-per-type approach; however, I can't seem to figure out how to determine what type of Person a specific object is. For example, if I do something like:

        var people = from p in entities.Person
                     select p;
        return people.ToList<Person>();

    In the list I form from this, all I care about is the Id field, so I don't want to query all the subtype tables (this is a web page list with links, so all I need is the name and the Id, both of which live in the Persons table). However, I want to form different lists from this one query, one for each type of person (so one list for Clients and another for Employees). The issue is that if I have a Person entity, I can't see any way to determine whether that entity is a Client or an Employee without querying the Client or Employee tables directly. How can I easily determine the subtype of an entity without performing a bunch of additional database queries?
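    For what it's worth, with table-per-type the subtype check itself doesn't need any hand-written queries; a type test or OfType does it. The sketch below reuses the entity names from the question:

        // Split one result set into the two subtypes:
        var people    = entities.Person.ToList();
        var clients   = people.OfType<Client>().ToList();
        var employees = people.OfType<Employee>().ToList();

        // Or test a single object:
        foreach (var person in people)
        {
            if (person is Employee)
            {
                // employee-specific handling
            }
        }

        // Note: materializing Person entities under TPT still joins the subtype tables;
        // if only Id and name are needed, project just those columns instead of loading full entities.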

    Read the article
