Search Results

Search found 1962 results on 79 pages for 'slightly offtopic'.


  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index into a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (for example, see "Use a vector as an index to a matrix in MATLab"). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example:

        Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time)
        Matrix(1,2,:,:) = 999; % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(3,4,:,:) = 888; % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(4,4,:,:) = 124;

    Now, I want to be able to index with the subscripts (1,2) and (3,4), etc., and return not only the 999 and 888 which exist in Matrix(:,:,1,1) but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on. (IRL, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200).) I don't want to use linear indices because I would like the results to be in a similar vector fashion. For example, I would like a result which is something like:

        ans(time=1)
        999 888 124
        999 888 124

        ans(time=2)
        etc etc etc
        etc etc etc

    I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here - thus why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: "Accessing values using subscripts without using sub2ind") since I want all the information stored in the extra dimensions, 3 and 4, of the i-th and j-th indices, I don't think that a slightly faster version of sub2ind would cut it.
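
    For comparison only, the same kind of lookup can be written with NumPy's advanced indexing. This is a rough Python analogue of the MATLAB setup above (0-based indices, small made-up sizes), not a MATLAB answer:

        import numpy as np

        # (x, y, depth, time), small sizes just for illustration
        M = np.full((4, 4, 2, 2), np.nan)
        M[0, 1, :, :] = 999   # NumPy is 0-based, MATLAB is 1-based
        M[2, 3, :, :] = 888
        M[3, 3, :, :] = 124

        # subscript vectors for the (x, y) points of interest
        ix = np.array([0, 2, 3])
        iy = np.array([1, 3, 3])

        # shape (3, depth, time): every depth/time value for each (x, y) pair
        vals = M[ix, iy, :, :]
        print(vals[:, :, 0])   # all points at the first time index
        print(vals[:, :, 1])   # all points at the second time index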

    Read the article

  • Ninject problem binding to constant value in MVC3 with a RavenDB session

    - by Jim
    I've seen a lot of different ways of configuring Ninject with ASP.NET MVC, but the implementation seems to change slightly with each release of the MVC framework. I'm trying to inject a RavenDB session into my repository. Here is what I have that's almost working:

        public class MvcApplication : NinjectHttpApplication
        {
            ...
            protected override void OnApplicationStarted()
            {
                base.OnApplicationStarted();
                AreaRegistration.RegisterAllAreas();
                RegisterGlobalFilters(GlobalFilters.Filters);
                RegisterRoutes(RouteTable.Routes);
            }

            protected override IKernel CreateKernel()
            {
                return new StandardKernel(new MyNinjectModule());
            }

            public static IDocumentSession CurrentSession
            {
                get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
            }
            ...
        }

        public class MyNinjectModule : NinjectModule
        {
            public override void Load()
            {
                Bind<IUserRepository>().To<UserRepository>();
                Bind<IDocumentSession>().ToConstant(MvcApplication.CurrentSession);
            }
        }

    When it tries to resolve IDocumentSession, I get the following error:

        Error activating IDocumentSession using binding from IDocumentSession to constant value
        Provider returned null.
        Activation path:
        3) Injection of dependency IDocumentSession into parameter documentSession of constructor of type UserRepository

    Any ideas on how to make the IDocumentSession resolve?

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file, which looks like this (lifted verbatim from the distutils docs - the setuptools usage is almost identical):

        from distutils.core import setup
        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run 'python setup.py bdist_rpm --spec-only', which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'.

    How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line, and pass that as a parameter, instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocations. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.
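
    For what it's worth, distutils' setup() accepts a script_args keyword that is used in place of sys.argv[1:], so one way to drive a command from inside the current interpreter looks roughly like this (the package metadata is just the example above, trimmed down); whether setuptools behaves identically is worth verifying against its documentation:

        from distutils.core import setup

        # script_args replaces sys.argv[1:] for this invocation only,
        # so the outer program's command line is never parsed here.
        dist = setup(name='Distutils',
                     version='1.0',
                     description='Python Distribution Utilities',
                     packages=['distutils', 'distutils.command'],
                     script_args=['bdist_rpm', '--spec-only'])

    setup() returns the Distribution instance, so the calling program can inspect it afterwards if needed.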

    Read the article

  • How/when to hire new programmers, and how to integrate them?

    - by Shaul
    Hiring new programmers, especially in a small company, can often present a Catch-22 situation. We have too much work to do, so we need to hire new programmers. But we can't hire new programmers now, because they will need mentoring and several months of learning curve in your industry/product/environment before they're useful, and none of the programmers has time to be a mentor to a new programmer, because they're all completely swamped with the current workload.

    That may be a slightly frivolous way of describing the situation, but nevertheless, it's difficult for a small company on a tight budget to justify hiring someone who is not only going to be unproductive for a long time, but will also take away from the performance of the current programmers.

    How have you dealt with this kind of situation? When is the best time to hire someone? What are the best tasks to assign to a new team member so that they can learn their way around your code base and start getting their hands dirty as quickly as possible? How do you get the new guy useful without bogging your existing programmers down in too much mentoring?

    Any comments & suggestions you have are much appreciated!

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form:

        {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...}

    Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type in each day. Also, I'd like to make it unique by user_id, i.e. if a user has more events in the same day, count it only once.

    I'm trying to do this with map/reduce. I do:

        db.logs.mapReduce(
            function() { emit(this.type, 1); },
            function(k, vals) {
                var total = 0;
                for (var i = 0; i < vals.length; i++) total += vals[i];
                return total;
            })

    which nicely groups by type, but now, how can I group by date at the same time? It seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness?

    I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
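
    The key passed to emit() can be a document (sub-object) rather than an array, which is the usual way to group on two fields at once. A rough Python sketch of that idea with pymongo follows; the database and output collection names are made up, and the Collection.map_reduce call assumes a pymongo release that still ships it (it was removed in PyMongo 4):

        from pymongo import MongoClient
        from bson.code import Code

        db = MongoClient().mydb   # hypothetical database name

        mapper = Code("""
            function () {
                // a document (not an array) works as a compound key
                emit({date: this.date, type: this.type}, 1);
            }
        """)

        reducer = Code("""
            function (key, values) {
                var total = 0;
                for (var i = 0; i < values.length; i++) total += values[i];
                return total;
            }
        """)

        # writes the grouped counts into a (made-up) output collection
        result = db.logs.map_reduce(mapper, reducer, "events_by_date_and_type")
        for doc in result.find():
            print(doc["_id"], doc["value"])

    Per-user uniqueness is a separate problem; one common approach is a first pass that reduces to distinct (date, type, user_id) keys and a second pass that counts those.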

    Read the article

  • Python 3.0 IDE - Komodo and Eclipse both flaky?

    - by victorhooi
    Heya, I'm trying to find a decent IDE that supports Python 3.x, and offers code completion, an in-built pydocs viewer, Mercurial integration, and SSH/SFTP support.

    Anyhow, I'm trying Pydev, and I open up a .py file, it's in the Pydev perspective and "Run As" doesn't offer any options. It does when you start a Pydev project, but I don't want to start a project just to edit one single Python script, lol, I want to just open a .py file and have It Just Work...

    Plan 2, I try Komodo 6 Alpha 2. I actually quite like Komodo, and it's nice and snappy, offers in-built Mercurial support, as well as in-built SSH support (although it lacks SSH HTTP proxy support, which is slightly annoying). However, for some reason, it refuses to pick up Python 3. In Edit > Preferences > Languages there are two options, one for Python and one for Python3, but the Python3 one refuses to work, with either the official Python.org binaries or ActiveState's own ActivePython 3. Of course, I can set the "Python" interpreter to the 3.1 binary, but that's an ugly hack and breaks Python 2.x support.

    So, does anybody who uses an IDE for Python have any suggestions on either of these accounts, or can you recommend an alternate IDE for Python 3.0 development?

    Cheers, Victor

    Read the article

  • css problem - 'post link'

    - by csetzkorn
    Hi, I am no css expert so I am wondering whether someone could help. I have several forms to delete an item, like this:

        <form action="/Items/DeleteItem" class="deleteForm" method="post">
            Some text
            <input id="Id" name="Id" value="14014" type="hidden">
            <input value="Delete" class="link_button" type="submit">
        </form>

    This should look like this:

        Some text Delete

    Here 'Some text' describes the item and Delete is a 'link' which performs a post request to delete the item (as virtually all browsers only support the post and get requests). I have started to style things (CSS below) but the 'Delete' 'link' is still slightly offset in relation to 'Some text' (at least in firefox). I would appreciate any help with the css. Thanks!

        .link_button {
            background-color: white;
            border: 0;
            color: #034af3;
            text-decoration: underline;
            font-size: 1em;
            font-family: inherit;
            cursor: pointer;
            float: left;
            margin: 0;
            padding: 0;
        }

        .deleteForm {
            float: right;
            margin: 0;
            padding: 0;
        }

    Christian

    Read the article

  • dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }

    I have a new requirement in which, if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }

    My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid.

    On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that would make that transition easier would be much appreciated. Thanks

    Read the article

  • Help using RDA on a Desktop Applicaton?

    - by Joel
    I have a .NET 3.5 Compact Framework project that uses RDA for moving data between its mobile device's local SqlCe database and a remote MSSql-2008 server (it uses RDA Push and Pull). The server machine has a virtual directory with sqlcesa35.dll (v3.5.5386.0) set up for RDA. We usually install these cabs on the mobile devices and the RDA process does not have any problems:

        sqlce.wce5.armv4i.cab
        sqlce.repl.wce5.armv4i.cab

    Now I am trying to run this application as a desktop application. RDA Pull (download) has been working well, but the RDA Push (upload) is giving me some problems. This is the exception that I get in the desktop application when I try to use RDA Push:

        System.Data.SqlServerCe.SqlCeException
        The Client Agent and Server Agent component versions are incompatible. The compatible versions are: Client Agent versions 3.0 and 3.5 with Server Agent versions 3.5 and Client Agent version 3.5 with Server Agent version 3.5. Re-install the replication components with the matching versions for client and server agents. [ 35,30,Client Agent version = ,Server Agent version = ]

    I have tried copying the file C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Desktop\SqlServerCe.dll (v3.5.5692.0) to bin\debug. I have also tried copying another version of SqlServerCe.dll (v3.0.5206.0) to bin\debug, but this just gives me a slightly different exception:

        System.Data.SqlServerCe.SqlCeException [ 35,30 ]

    Is there a different setup or any different DLLs that I need to use?

    Thanks,
    -Joel

    Read the article

  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration:

        Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
        Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs - either built on A or downloaded from the Internet), a couple of DLLs built from sources and one executable. I am mostly developing on A and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of those:

        Dependent Assembly Microsoft.VC80.CRT could not be found

    and

        Last Error was The referenced assembly is not installed on your system.

    plus the slightly more amusing:

        Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully.

    At this point I'm trying my Google-Fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the computer they are built on.

    The next step was to try Dependency Walker, and it baffled me even more - my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL. However, the executable seems to be all right in respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot.

    That's where I am completely out of ideas what to do, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement syncing data. The criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience?

    For me, I'm thinking of something like this:

        1) Store the Last Modified Date on the iPhone.
        2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
        3) The server will process it and send back only the data modified since last time.
        4) This data is formatted as so:

            <+><data id="..."></data></+>                                   // add this to SQLite/CoreData
            <-><data id="..."></data></->                                   // remove this
            <%><data id="..."><attribute>newValue</attribute></data></%>   // new modified value

        5) Once everything is downloaded and updated, I will update the Last Modified Date field.

    I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field I would just remove the data with the specified id and then add it again (assuming the id here is not some automatically auto-incremented field).

    The main problem with this strategy is: if the network goes down when I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again - not to mention the potential for inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until the new data is available. Should I use a Last-Modified-Date for each of the data fields and update the data gradually?
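
    One way to make step 5 safe is to commit the whole delta and the new timestamp in a single local transaction, so a failed sync leaves the old marker in place. A rough sketch of just that idea, written in Python with sqlite3 purely to illustrate the transaction boundary (table and column names are made up, not from the original post):

        import sqlite3

        def apply_delta(db_path, delta, new_last_modified):
            """delta is a list of (op, record) tuples parsed from the server
            response: op is '+', '-' or '%'."""
            conn = sqlite3.connect(db_path)
            try:
                with conn:  # one transaction: all-or-nothing
                    for op, rec in delta:
                        if op in ('-', '%'):
                            conn.execute("DELETE FROM items WHERE id = ?", (rec["id"],))
                        if op in ('+', '%'):
                            conn.execute("INSERT INTO items (id, name) VALUES (?, ?)",
                                         (rec["id"], rec["name"]))
                    # commit the new sync marker together with the data
                    conn.execute("UPDATE sync_state SET last_modified = ?",
                                 (new_last_modified,))
            finally:
                conn.close()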

    Read the article

  • What's the standard convention for creating a new NSArray from an existing NSArray?

    - by Prairiedogg
    Let's say I have an NSArray of NSDictionaries that is 10 elements long. I want to create a second NSArray with the values for a single key on each dictionary. The best way I can figure to do this is:

        NSMutableArray *nameArray = [[NSMutableArray alloc] initWithCapacity:[array count]];
        for (NSDictionary *p in array) {
            [nameArray addObject:[p objectForKey:@"name"]];
        }
        self.my_new_array = array;
        [array release];
        [nameArray release];

    But in theory, I should be able to get away with not using a mutable array and using a counter in conjunction with [nameArray addObjectAtIndex:count], because the new list should be exactly as long as the old list. Please note that I am NOT trying to filter for a subset of the original array, but to make a new array with exactly the same number of elements, just with values dredged up from some arbitrary attribute of each element in the array.

    In Python one could solve this problem like this:

        new_list = [p['name'] for p in old_list]

    or, if you were a masochist, like this:

        new_list = map(lambda p: p['name'], old_list)

    Having to be slightly more explicit in Objective-C makes me wonder if there is an accepted common way of handling these situations.

    Read the article

  • Uncaught exception while using xdebug

    - by rich97
    I'm not too great with xdebug, so forgive me if this is a stupid question. I installed it on a separate machine and it performed some magic for me, like formatting my var_dump() output and catching any uncaught exceptions. This machine, however, fails to format the stack traces, outputting plain text which is extremely hard to read.

    As I am learning the Lithium PHP framework I am required to use PHP 5.3. On my other machine I compiled from source, but on this machine I'm using the precompiled packages from dotdeb.org. As far as I can tell the only difference is that this is a slightly newer version of PHP and it comes with the Suhosin patch. The other odd thing is that the xdebug functions such as xdebug_var_dump() work, aside from poor formatting.

    This is an Ubuntu machine, so I don't know if it could be anything to do with the dotdeb packages, but I have installed xdebug through pecl, the downloadable tarball and from the SVN repository, all to no avail.

    You can see my php.ini and the output of php -i in the following gist. I copied php.ini from /etc/php5/apache2/php.ini over to /etc/php5/cli/php.ini, so php -i should reflect my apache setup.

        http://gist.github.com/391675

    Any help is appreciated. Rich

    Read the article

  • Unable to use stored procs in a generic repository for the entity framework. (ASP.NET MVC 2)

    - by Matt
    Hi, I have a generic repository that uses the Entity Framework to manipulate the database. The original code is credited to Morshed Anwar's post on CodeProject. I've taken his code and modified it slightly to fit my needs. Unfortunately, I'm unable to call an imported function because the function is only recognized for my specific instance of the entity framework:

        public class Repository<E, C> : IRepository<E, C>, IDisposable
            where E : EntityObject
            where C : ObjectContext
        {
            private readonly C _ctx;

            public ObjectResult<MyObject> CallFunction()
            {
                // This does not work because "CallSomeImportedFunction" only works on
                // my object. I also cannot cast it:
                //     (TrackItDBEntities)_ctx.CallSomeImportedFunction();
                // Not really sure why though... but either way that kind of ruins the genericness of it.
                return _ctx.CallSomeImportedFunction();
            }
        }

    Anyone know how I can make this work with the repository I already have? Or does anyone know a generic way of calling stored procedures with the Entity Framework?

    Thanks,
    Matt

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool, I get an error on connection:

        AutoReconnect: could not find master/primary

    But when I change the _Pool class directly in the pymongo.connection module, it works pretty fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with the same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement a pool for all mongo connections (for all threads). I use this code:

        import os          # needed for os.getpid() below
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            #sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                self.sock = None

        pymongo.connection._Pool = GPool
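
    One thing worth checking is the ordering: assuming (as in pymongo 1.9) that the pool object is created inside Connection.__init__, the patch only has a chance to take effect if it is applied before the first Connection is constructed. A minimal ordering sketch, not a confirmed fix for the AutoReconnect error:

        import pymongo
        import pymongo.connection

        # install the replacement pool class first (GPool as defined above)...
        pymongo.connection._Pool = GPool

        # ...and only then create connections, so they pick up GPool
        conn = pymongo.Connection("localhost", 27017)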

    Read the article

  • Can I stop the dbml designer from adding a connection string to the dbml file?

    - by drs9222
    We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion; it suffices to say that it returns a connection string and I have to use it.

    I want my LINQ to SQL DataContext to use this, so I removed all connection string information from the dbml file and created a partial class with a default constructor like this:

        public partial class SampleDataContext
        {
            public SampleDataContext()
                : base(AppSettings.GetConnectionString())
            {
            }
        }

    This works fine until I use the designer to drag and drop a table into the diagram. The act of dragging a table into the diagram will do several unwanted things:

        1. A settings file will be created
        2. An app.config file will be created
        3. My dbml file will have the connection string embedded in it

    All of this is done before I even save the file! When I save the diagram the designer file is recreated and it will contain its own default constructor which uses the wrong connection string. Of course this means my DataContext now has two default constructors and I can't build anymore!

    I can undo all of these bad things but it is annoying. I have to manually remove the connection string and the new files after each change! Is there any way I can stop the designer from making these changes without asking?

    EDIT: The requirement to use the AppSettings.GetConnectionString() method was imposed on me rather late in the game. I used to use something very similar to what it generates for me. There are quite a few places that call the default constructor. I am aware that I could change them all to create the data context in another way (using a different constructor, static method, factory, etc.). That kind of change would only be slightly annoying since it would only have to be done once. However, I feel that it is sidestepping the real issue. The dbml file and configuration files would still contain an incorrect, if unused, connection string, which at best could confuse other developers.

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

    Sample 1 (MI score of 71):

        public static String Sha1(String plainText)
        {
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                Byte[] text = Encoding.Unicode.GetBytes(plainText);
                Byte[] hashBytes = sha1.ComputeHash(text);
                return Convert.ToBase64String(hashBytes);
            }
        }

    Sample 2 (MI score of 73):

        public static String Sha1(String plainText)
        {
            Byte[] text, hashBytes;
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                text = Encoding.Unicode.GetBytes(plainText);
                hashBytes = sha1.ComputeHash(text);
            }
            return Convert.ToBase64String(hashBytes);
        }

    I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with

        return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText)));

    I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious, though, as to what the logic might be behind the increase in this case. It's obviously not line count.

    Read the article

  • Migrate style from TabItem to TabHeader

    - by OffApps Cory
    Good day! I have a TabControl with TabItems that have been customized via a control template. This control template specifies a trigger whereby, on mouseover, the content of the tab header grows slightly.

        <ControlTemplate>
            <Storyboard x:Key="TabHeaderGrow">
                <DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
                        Storyboard.TargetName="TabName"
                        Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)">
                    <SplineDoubleKeyFrame KeyTime="00:00:00.1000000" Value="1.1"/>
                </DoubleAnimationUsingKeyFrames>
                <DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
                        Storyboard.TargetName="TabName"
                        Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)">
                    <SplineDoubleKeyFrame KeyTime="00:00:00.1000000" Value="1.1"/>
                </DoubleAnimationUsingKeyFrames>
            </Storyboard>
            <ControlTemplate.Triggers>
                <EventTrigger RoutedEvent="Mouse.MouseEnter">
                    <BeginStoryboard Storyboard="{StaticResource TabHeaderGrow}"/>
                </EventTrigger>
            </ControlTemplate.Triggers>
        </ControlTemplate>

    When I mouse over any of the tabs they work as expected; however, the trigger also fires when I mouse over any of the elements in the tab body. I know that I need to migrate the control styling to a TabHeader ControlTemplate, but I am unsure how to do that. I can't seem to do template binding for the content of the tab header. Any help would be appreciated.

    Read the article

  • Advanced tasks using Web.Config transformation

    - by dcadenas
    Does anyone know if there is a way to "transform" specific sections of values instead of replacing the whole value or an attribute? For example, I've got several appSettings entries that specify the URLs for different web services. These entries are slightly different in the dev environment than the production environment. Some are less trivial than others.

        <!-- DEV ENTRY -->
        <appSettings>
            <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.dev.domain.com/v1.2.3.4/entryPoint.asmx" />
            <add key="serviceName2_WebsService_Url" value="http://ma1-lab.lab1.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

        <!-- PROD ENTRY -->
        <appSettings>
            <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
            <add key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

    So far, I know I can do something like this in the Web.Release.Config:

        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)"
             key="serviceName1_WebsService_Url"
             value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)"
             key="serviceName2_WebsService_Url"
             value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />

    However, every time the version for that web service is updated, I would have to update the Web.Release.Config as well, which defeats the purpose of simplifying my web.config updates. I know I could also split that URL into different sections and update them independently, but I'd rather have it all in one key. I've looked through the available web.config transforms but nothing seems to be geared towards what I am trying to accomplish.

    These are the websites I am using as a reference: Vishal Joshi's blog, MSDN Help, and the Channel9 video.

    Any help would be much appreciated! -D

    Read the article

  • Standard Android Button with a different color

    - by Mike
    I'd like to change the color of a standard Android button slightly in order to better match a client's branding. For example, see the "Find a Table" button in the OpenTable application.

    The best way I've found to do this so far is to change the Button's drawable to the following drawable located in res/drawable/red_button.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <selector xmlns:android="http://schemas.android.com/apk/res/android">
            <item android:state_pressed="true" android:drawable="@drawable/red_button_pressed" />
            <item android:state_focused="true" android:drawable="@drawable/red_button_focus" />
            <item android:drawable="@drawable/red_button_rest" />
        </selector>

    But doing that requires that I actually create three different drawables for each button I want to customize (one for the button at rest, one when focused, and one when pressed). That seems more complicated and non-DRY than I need. All I really want to do is apply some sort of color transform to the button. Is there an easier way to go about changing a button's color than I'm doing?

    Read the article

  • ASP.net drop down dynamically styling and then remembering the styles on aborted submit

    - by peacedog
    So, I've got an ASP drop down list (this is .NET 2.0). I'm binding it with data. Basically, when the page loads and it's not a postback, we'll fetch record data, bind all the drop downs, and set them to their appropriate values (strictly speaking we: initialize the page with a basic set of data from the DB, bind drop downs from the DB, fetch the actual record data from the DB, set drop downs to the appropriate settings at this time).

    What I want to do is selectively style the list options. So the database returns 3 items: ID, Text, and a flag indicating whether the record is "active" (and I'll style appropriately). It's easy enough to do and I've done it.

    My problem is what happens when a form submission is halted. We have slightly extended the Page class and created an AddError() method, which will create a list of errors from failed business rule checks and then display them in a ValidationSummary. It works something like this, in the submit button's click event:

        CheckBizRules();
        if (Page.IsValid)
        {
            SaveData();
        }

    If any business rule check fails, the Page will not be valid. The problem is, when the page re-renders (viewstate is enabled, but no data is rebound) my beautiful conditional styling is now sadly gone, off to live in the land of the missing socks. I need to preserve it.

    I was hoping to avoid another DB call here (e.g. getting the list data back from the DB again if the page isn't valid, just for purposes of re-styling the list). But it's not the end of the world if that's my course of action. I was hoping someone might have an alternative suggestion.

    I couldn't think of how to phrase this question better; if anyone has any suggestions or needs clarification, don't hesitate to get it, by force if need be. ;)

    Read the article

  • Bring a subview to the top on mouseDown: AND keep receiving events (for dragging)

    - by d11wtq
    Ok, basically I have a view with a series of slightly overlapping subviews. When a subview is clicked, it moves to the top (among other things), by a really simple two-liner:

        -(void)mouseDown:(NSEvent *)theEvent
        {
            if (_selected) {
                // Don't do anything; this subview is already in the foreground
                return;
            }

            NSView *superview = [self superview];
            [self removeFromSuperview];
            [superview addSubview:self positioned:NSWindowAbove relativeTo:nil];
        }

    This works fine and seems to be the "normal" way to do this. The problem is that I'm now introducing some drag logic to the view, so I need to respond to -mouseDragged:. Unfortunately, since the view is removed from the view hierarchy and re-added in the process, I can't have the view come to the foreground and be dragged in the same mouse action. It only drags if I let go of the mouse and click on the view again, since the second click doesn't do any view hierarchy juggling.

    Is there anything I can do that will allow the view to move to the foreground like this, AND continue to receive the subsequent -mouseDragged: and -mouseUp: events that follow?

    I started along the lines of thinking of overriding -hitTest: in the superview and intercepting the mouseDown event in order to bring the view to the foreground before it actually receives the event. The problem here is, from within -hitTest:, how do I distinguish what type of event I'm actually performing the hit test for? I wouldn't want to move the subview to the foreground for other mouse events, such as mouseMoved etc.

    Read the article

  • spring - constructor injection and overriding parent definition of nested bean

    - by mdma
    I've read the Spring 3 reference on inheriting bean definitions, but I'm confused about what is possible and not possible. For example, here is a bean that takes a collaborator bean, configured with the value 12:

        <bean name="beanService12" class="SomeSevice">
            <constructor-arg index="0">
                <bean name="beanBaseNested" class="SomeCollaborator">
                    <constructor-arg index="0" value="12"/>
                </bean>
            </constructor-arg>
        </bean>

    I'd then like to be able to create similar beans, with slightly different configured collaborators. Can I do something like:

        <bean name="beanService13" parent="beanService12">
            <constructor-arg index="0">
                <bean>
                    <constructor-arg index="0" value="13"/>
                </bean>
            </constructor-arg>
        </bean>

    I'm not sure this is possible and, if it were, it feels a bit clunky. Is there a nicer way to override small parts of a large nested bean definition? It seems the child bean has to know quite a lot about the parent, e.g. the constructor index.

    I'd prefer not to change the structure - the parent beans use collaborators to perform their function, but I can add properties and use property injection if that helps. This is a repeated pattern; would creating a custom schema help? Thanks for any advice!

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:

        merge :: Tree a -> Tree a -> Tree a --// O(log n) where n is size of input trees
        singleton :: a -> Tree a            --// O(1)
        empty :: Tree a                     --// O(1)

        listToTree :: [a] -> Tree a         --// Supposedly O(n)
        listToTree = listToTreeR . (map singleton)

        listToTreeR :: [Tree a] -> Tree a
        listToTreeR [] = empty
        listToTreeR (x:[]) = x
        listToTreeR xs = listToTreeR (mergePairs xs)

        mergePairs :: [Tree a] -> [Tree a]
        mergePairs [] = []
        mergePairs (x:[]) = [x]
        mergePairs (x:y:xs) = merge x y : mergePairs xs

    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-(

    There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list, and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step, and h=1 is the last recursive step. Each call to mergePairs thus takes time

        (2^h-1) * log(n/(2^h))

    I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
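
    One standard way to bound this kind of levelled sum, under the assumption that a merge at level k (counting levels from the singletons) combines two trees of size about 2^(k-1) and therefore costs at most c*k, is:

        % At level k (k = 1 .. ceil(log2 n)) there are about n / 2^k merges,
        % each costing at most c*k, so the total work is bounded by a
        % geometric-style series:
        \[
          T(n) \;\le\; \sum_{k=1}^{\lceil \log_2 n \rceil} \frac{n}{2^{k}}\, c\, k
               \;\le\; c\, n \sum_{k \ge 1} \frac{k}{2^{k}}
               \;=\; 2\, c\, n
               \;=\; O(n).
        \]

    The point is that the series sum of k/2^k converges (to 2), which is where the linear bound comes from; the per-level accounting here differs slightly from the 2^h-1 counting above.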

    Read the article

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns:

        1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer), I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard.

        2) The 2nd edition is shorter (608 pages vs. 720), so I guess it will be slightly easier to get through.

        3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML, so it doesn't matter, but I don't know if I should be worrying about this or not.

        4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for.

    Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome, but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

    Read the article
