Search Results

Search found 45542 results on 1822 pages for 'enable add ons'.


  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012, if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live, we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we keep running into is having to alter or extend that script to add or fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data when it is added. Other times we may want to create new records in transactional tables (e.g. assigning specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once, and somehow include them in the build? Does it know which scripts need to be run (for example, if we are inserting default data we don't want to do that a second time)? Or is there a better way to manage these scripts?
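
    One common workaround, independent of Database Projects, is a small runner that records one-time data scripts in a tracking table so each script is applied at most once. A minimal sketch, where the table name, directory layout, and connection handling are all illustrative assumptions (and scripts containing GO batch separators would need splitting first):

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class OneTimeScriptRunner
        {
            // Runs each *.sql file at most once, recording executions in a tracking table.
            public static void Run(string connectionString, string scriptDirectory)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // Tracking table remembers which one-time scripts have already run.
                    using (var create = new SqlCommand(
                        @"IF OBJECT_ID('dbo.DeploymentScripts') IS NULL
                              CREATE TABLE dbo.DeploymentScripts
                                  (ScriptName NVARCHAR(260) PRIMARY KEY,
                                   AppliedOn DATETIME NOT NULL DEFAULT GETDATE())", conn))
                    {
                        create.ExecuteNonQuery();
                    }

                    foreach (string path in Directory.GetFiles(scriptDirectory, "*.sql"))
                    {
                        string name = Path.GetFileName(path);
                        using (var check = new SqlCommand(
                            "SELECT COUNT(*) FROM dbo.DeploymentScripts WHERE ScriptName = @name", conn))
                        {
                            check.Parameters.AddWithValue("@name", name);
                            if ((int)check.ExecuteScalar() > 0) continue; // already applied
                        }
                        using (var run = new SqlCommand(File.ReadAllText(path), conn))
                        {
                            run.ExecuteNonQuery();
                        }
                        using (var record = new SqlCommand(
                            "INSERT INTO dbo.DeploymentScripts (ScriptName) VALUES (@name)", conn))
                        {
                            record.Parameters.AddWithValue("@name", name);
                            record.ExecuteNonQuery();
                        }
                    }
                }
            }
        }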

    Read the article

  • Cryptswap not mounted?

    - by woody
    I believe I have my swap set up, but am not sure, because on startup it says something along the lines of "could not mount /dev/mapper/cryptswap1, M for manual, S for skip". Yet it appears to be mounted. When I run free -m, the output is:

        total used free shared buffers cached
        Mem: 3887 769 3117 0 54 348
        -/+ buffers/cache: 366 3520
        Swap: 4026 0 4026

    sudo blkid gives:

        /dev/sda1: UUID="9fb3ccd6-3732-4989-bfa4-e943a09f1153" TYPE="ext4"
        /dev/mapper/cryptswap1: UUID="bd9fe154-8621-48b3-95d2-ae5c91f373fd" TYPE="swap"

    cat /etc/crypttab shows:

        cryptswap1 /dev/sda5 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

    and my /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=9fb3ccd6-3732-4989-bfa4-e943a09f1153 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        #UUID=bb0e378e-8742-435a-beda-ae7788a7c1b0 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    The file /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla already exists, and its contents are:

        [Re-enable hibernate by default]
        Identity=unix-user:*
        Action=org.freedesktop.upower.hibernate
        ResultActive=yes

    Is my swap not set up correctly, or is it just not mounting?

    Read the article

  • Binding BoundingSpheres to a world matrix in XNA

    - by NDraskovic
    I made a program that loads the locations of items on the scene from a file like this:

        using (StreamReader sr = new StreamReader(OpenFileDialog1.FileName))
        {
            String line;
            while ((line = sr.ReadLine()) != null)
            {
                row = line.Split(',');
                model = row[0];
                x = row[1];
                y = row[2];
                z = row[3];
                elements.Add(Convert.ToInt32(model));
                data.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                spheres.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
            }
        }

    I also have a list of BoundingSpheres (called spheres) that gets a new bounding sphere for each line of the file. In this program I have one item (a simple box) that moves (it has its own world matrix, called matrixBox), while the other items are static the entire time (they share a world matrix called simply world). The problem is that when I move the box, the bounding spheres move with it. So how can I bind all the BoundingSpheres (except the one corresponding to the box) to the static world matrix, so that they stay in place when the box moves?
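
    In XNA 4.0 a BoundingSphere can be transformed by a matrix without modifying the original, so one approach is to keep the loaded spheres untouched and transform each one only by the world matrix of the object it belongs to. A minimal sketch, where boxIndex (the index of the box's sphere in the list) is an assumption for illustration:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class SphereHelper
        {
            // Returns world-space copies of the spheres: the box's sphere follows
            // matrixBox, every other sphere follows the static world matrix.
            public static List<BoundingSphere> ToWorldSpace(
                List<BoundingSphere> spheres, int boxIndex, Matrix matrixBox, Matrix world)
            {
                var result = new List<BoundingSphere>(spheres.Count);
                for (int i = 0; i < spheres.Count; i++)
                    result.Add(spheres[i].Transform(i == boxIndex ? matrixBox : world));
                return result;
            }
        }

    Because Transform returns a new sphere, the stored list never moves; only the per-frame copies do.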

    Read the article

  • BoundingSpheres move when they should not

    - by NDraskovic
    I have an XNA 4.0 project in which I load a file that contains the types and coordinates of the items I need to draw to the screen. I also need to check whether one particular type (the only movable one) is passing in front of, or through, the other items. This is the code I use to load the configuration:

        if (ks.IsKeyDown(Microsoft.Xna.Framework.Input.Keys.L))
        {
            this.GraphicsDevice.Clear(Color.CornflowerBlue);
            Otvaranje.ShowDialog();
            try
            {
                using (StreamReader sr = new StreamReader(Otvaranje.FileName))
                {
                    String linija;
                    while ((linija = sr.ReadLine()) != null)
                    {
                        red = linija.Split(',');
                        model = red[0];
                        x = red[1];
                        y = red[2];
                        z = red[3];
                        elementi.Add(Convert.ToInt32(model));
                        podatci.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                        sfere.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
                    }
                }
            }
            catch (Exception ex)
            {
                Window.Title = ex.ToString();
            }
        }

    "Otvaranje" is an OpenFileDialog object, "elementi" is a List<int> (it determines the type of item to draw), "podatci" is a List<Vector3> (it determines where the items will be drawn), and "sfere" is a List<BoundingSphere>. Now, I solved the picking algorithm (checking for ray and bounding-sphere intersection) and it works fine, but the collision detection does not. I noticed, while using picking, that the BoundingSpheres move even though the objects they correspond to do not. The movable object is drawn with the world1 matrix, and the static objects are drawn with the world2 matrix (world1 and world2 have the same values; I just separated them so that the static elements would not move when the movable one does). The problem is that when I move the item, all the BoundingSpheres move accordingly. How can I move only the BoundingSphere that corresponds to that particular item, and leave the rest where they are?

    Read the article

  • SEO, IIS 7 and web.config in subfolder issue

    - by tesicg
    We have an ASP.NET application that has a sub-folder with .aspx pages and a separate web.config file in it. The .aspx pages in that sub-folder behave as a separate site. In the web.config file at the application level, I set a rule that removes trailing slashes:

        <rewrite>
          <rules>
            <rule name="RemoveTrailingSlashRule1" stopProcessing="true">
              <match url="(.*)/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
          </rules>
        </rewrite>

    I expected this rule to propagate downward to the sub-folder as well, so that typing http://concert.local/elki/ to access the site in the sub-folder would redirect to the version without the trailing slash, http://concert.local/elki. But the trailing slash remains. The web.config file in the sub-folder looks like this:

        <configuration>
          <system.webServer>
            <defaultDocument>
              <files>
                <add value="Sections.aspx" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>

    Read the article

  • How to find out when to increase bit rate? (TCP streaming solution)

    - by Kabumbus
    How do I find out when to increase the bit rate (in a TCP streaming solution)? We have a stream made of "frames"; each frame has a timestamp, and its bit rate is effectively its size. We generate frames with our app and stream them one by one to our TCP server socket. At the same time, the server posts replies, so after each sent frame we read from the socket and learn which timestamp the server is currently on. If that timestamp is lower than the previous frame's, we lower the bit rate by 20%. This scheme seems to work, giving me one-way (downward-only) VBR, but I wonder how to implement increases. We could always try increasing by 5% each frame until we reach some desired limit, but each time there is a delay we would lose the real-time feature of our stream... Generally, the scheme is meant to discover how much of the network is currently used by other apps, and at the same time how loaded the server is, so we can stream just the right amount of data for everyone to receive it in real time. So what should I do to add increases to my scheme? Having a current bit rate of A, I thought we could add +7% for 3 frames and then one -20%; then, if all 3 of the +7% frames arrived in time, add 14% to A and repeat the cycle, and hopefully it would not be really noticeable if the 2nd frame came to us with a delay... This is probably too localised, but using TCP is a requirement for me.
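
    What's described is essentially additive-increase/multiplicative-decrease (AIMD), the same policy TCP's own congestion control uses: probe upward gently, back off sharply on congestion. A minimal sketch of such a controller, where all percentages and names are illustrative rather than taken from the question:

        using System;

        // AIMD-style rate controller: small upward probes, hard backoff on delay.
        class RateController
        {
            private double bitRate;          // current target bit rate
            private readonly double floor;   // never drop below this
            private readonly double ceiling; // never probe above this

            public RateController(double initial, double floor, double ceiling)
            {
                this.bitRate = initial;
                this.floor = floor;
                this.ceiling = ceiling;
            }

            public double BitRate { get { return bitRate; } }

            // Call once per server reply.
            public void OnFeedback(bool frameArrivedOnTime)
            {
                if (frameArrivedOnTime)
                    bitRate = Math.Min(ceiling, bitRate * 1.05); // gentle probe upward
                else
                    bitRate = Math.Max(floor, bitRate * 0.80);   // back off hard
            }
        }

    Probing up gently and backing off hard is what keeps the stream from oscillating: an occasional late frame costs one 20% cut, which a few on-time frames then win back.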

    Read the article

  • OSSEC HIDS Notification "Unknown problem somewhere in the system." (seems like hdd issue)

    - by John
    From what I understand, something is wrong with the hard disk; I am trying to find some commands to run tests that check whether it is OK. Here is the full list of logs after a reboot of the system, following the "Unknown problem somewhere in the system." notification:

        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:c8:38:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: ata2.00: error: { UNC }
        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:78:88:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: sd 1:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461400 on sda1)
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461672 on sda1)

    Some of these logs appear two or more times. Thanks.

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates go up as they increase. I'm having trouble deciding how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My instinct is to set the world position as the bottom left of the object and track every object this way. The downside is that I would have to translate it to an on-screen position when rendering. The upside is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning at the top left? Here are the OpenGL calls that start rendering:

        // enable textures since we're going to use these for our sprites
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        // enable alpha blending
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // disable the OpenGL depth test since we're rendering 2D graphics
        glDisable(GL_DEPTH_TEST);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);

    I assume I need to change

        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);

    to

        glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
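
    If the projection stays top-left (the first glOrtho form), tracking positions from the bottom left only costs a one-line flip at render time. A minimal sketch of that conversion, where the names and pixel convention are illustrative assumptions:

        // Converts a bottom-left-origin world Y (pixels) into a top-left-origin
        // screen Y, given the object's full pixel height (rows * 32 for tiles).
        static class TileCoords
        {
            public static int WorldToScreenY(int worldY, int objectHeight, int screenHeight)
            {
                // The object's top edge sits at worldY + objectHeight in world space;
                // flipping the axis maps it to screenHeight - (worldY + objectHeight).
                return screenHeight - (worldY + objectHeight);
            }
        }

    Flipping the projection with glOrtho(0, WIDTH, 0, HEIGHT, 1, -1) achieves the same thing globally, but texture V coordinates may then need flipping too, which is one reason many 2D engines keep the top-left convention and convert.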

    Read the article

  • Proper library for enums

    - by Bobson
    I'm trying to refactor some code so that the display is separate from the implementation, and I'm not sure where to put the existing enums. My project is currently structured as follows:

        Utilities
        RemoteData (Depends on: Utilities)
        LocalData (Depends on: RemoteData, Utilities)
        RemoteWeb (Depends on: RemoteData, Utilities)
        LocalWeb (Depends on: RemoteData, LocalData, Utilities)

    I'm now trying to add "ViewLibrary (Depends on: Utilities)" to this list, and then add it as a new dependency to both RemoteWeb and LocalWeb. It will contain a set of interfaces which the other two projects will implement, use to populate the view, and then consume the result. There's an enum which is currently used in all the projects except Utilities. It thus lives in the RemoteData project, because everything else depends on that. But this new ViewLibrary won't depend on either data project. So how will it know about this enum? Some options I see:

    1. Create a new project just for shared enum values.
    2. Add it to Utilities, even though it is related to data.
    3. Define it a second time in ViewLibrary, and require both RemoteWeb and LocalWeb to convert the one type into the other when they access the shared views.
    4. Add a dependency on RemoteData to the ViewLibrary, even though it's supposed to be independent of data sources.

    Are there any better options? Is this structure flawed to begin with?
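
    For what it's worth, option 2 (or option 1, which is the same shape under a new project name) keeps the dependency graph acyclic without any duplication; a minimal sketch with hypothetical type names:

        // In Utilities (already referenced by every other project):
        namespace Utilities
        {
            public enum RecordStatus { Active, Archived, Deleted } // hypothetical enum
        }

        // In ViewLibrary (depends only on Utilities):
        namespace ViewLibrary
        {
            public interface IRecordView
            {
                Utilities.RecordStatus Status { get; } // implemented by RemoteWeb and LocalWeb
            }
        }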

    Read the article

  • Oracle Secure Global Desktop (SGD) 5.1

    - by wcoekaer
    Last week, we released the latest update of Oracle Secure Global Desktop. Release 5.1 introduces a number of bug fixes and smaller changes, but the most interesting one is definitely the increased support for HTML5-based client access. In SGD 5.0 we added support for Apple iPads using Safari to connect to SGD and display your session right inside the browser. The traditional model for SGD is that you connect to the webtop using a web browser, and applications are displayed locally using a local client (tta). This client gets installed the first time you connect. So in the traditional model (which works very well...) you need a web browser, Java and the tta client. With the addition of HTML5 support, there's no longer a need to install a local client; in fact, there is also no longer a need to have Java installed. We currently support Chrome as a browser to enable HTML5 clients. This allows us to enable HTML5 on Android devices and also on desktops running Chrome (Windows, Mac OS X, Linux). Connections will work transparently across proxy servers as well. So now you can run any SGD published app or desktop right from your web browser, inside a browser window. This is very convenient and cool.

    Read the article

  • How to populate a private container for unit test?

    - by Sardathrion
    I have a class that defines a private container (well, __container to be exact, since it is Python). I am using the information within said container as part of the logic of what the class does, and I have the ability to add and delete its elements. For unit tests, I need to populate this container with some data. That data depends on the test being run, so putting it all in setUp() would be impractical and bloated -- plus it could add unwanted side effects. Since the data is private, I can only add things via the public interface of the object. This runs code that need not run during a unit test, and in some cases it is just a copy and paste from another test. Currently I am mocking the whole container, but somehow that does not feel like an elegant solution. Because of the Python mocking framework (mock), this requires the container to be public -- so I can use patch.dict(). I would rather keep that data private. What pattern can one use to populate the container without exercising the public interface, so I have data to test with? Is there a way to do this with mock's patch.dict() that I missed?

    Read the article

  • How to setup users for desktop app with SQL Azure as backend?

    - by Manuel
    I'm considering SQL Azure as the DB for a new application I'm developing. The reason I want to go with Azure is that I don't want to have to maintain yet another database (or several), and I want my users to be able to access the data from anywhere. The problem is that I'm not clear on how users will connect. The application is a basic CRUD type of Windows app. I've read that you need to add your IP to SQL Azure's firewall to connect to it, but I don't know if that's for administration purposes only. Can anyone clarify whether anyone (anywhere) can access the data with the proper credentials? Which of the following scenarios would work best (if at all)?

    A) Add each user to SQL Azure and have the app connect directly to Azure, as if it were connecting to SQL Server.
    B) Add an anonymous user to SQL Azure and pass the real user's password/hash with every call, so the Azure database can service the requests accordingly.
    C) Put a WCF service in between, so that it handles the authentication; the service serves each user only the appropriate information given his/her authentication, and SQL Azure is open to the service exclusively.
    D) Other ideas are welcome.

    This is confusing, because all the Azure examples I see are for websites. I have a hard time believing SQL Azure wouldn't handle the case of desktop apps connecting to it. So what's the best practice?
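
    For scenario A, the wire protocol is the same TDS that SQL Server speaks, so the client code is ordinary ADO.NET; a minimal sketch, where the server, database, table and credential names are all placeholders:

        using System;
        using System.Data.SqlClient;

        class AzureConnectDemo
        {
            static void Main()
            {
                // SQL Azure requires Encrypt=True; "user@server" is its login form.
                const string cs =
                    "Server=tcp:myserver.database.windows.net,1433;" +
                    "Database=mydb;User ID=myuser@myserver;Password=...;Encrypt=True;";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Customers", conn))
                        Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }

    Note that the firewall rules apply to every connection, not just administrative ones, which is one argument for scenario C: only the service's address needs to be allowed through.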

    Read the article

  • JustCode Q1 SP1 – Typing Assistance Improvements

    Our goal is to make JustCode's typing assistance save keystrokes without getting in your way. As such, typing assistance received quite a few changes for SP1. It now works faster, and better than ever :)

    Bracket Completion
    When you add an open bracket at the beginning of a line, JustCode will now add the closing bracket for you at the end of the line, even if there is code on it. When you hit enter, the auto formatter will run.

    Prevents Bracket Doubling
    The SP1 release now does its best to prevent you from doubling parentheses, braces, and brackets. Instead of doubling these items up, it will move your cursor outside the closing item.

    Auto Format on Semicolon and Close Bracket
    JustCode now automatically runs the auto formatter when you press semicolon, or add a closing bracket at the end of a line.

    Auto Completion of Single & ...

    Read the article

  • How to Activate wifi in Toshiba Satellite C655?

    - by user4106
    I've recently bought a Toshiba Satellite C655. It came with Windows 7 preinstalled. I've never had a notebook before, but as a desktop user I had been an Ubuntu user for 2 years, and I never had a problem with drivers, wifi, etc. When I tried to install Ubuntu 10.04, and also the new and fresh 10.10, on my new laptop, I experienced some trouble with some of the components. For example, I was not able to activate my wifi card, although I know the kernel recognizes it correctly, because when I run "lspci" at the terminal it is listed. Anyhow, I'm not able to "activate" the wifi, or do whatever is necessary in order to search for available public networks and connect to them. The wifi card the laptop has is (the lspci output):

        03:00.0 Network Controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)

    Is there anything you can help me with? Thanks a lot in advance!

    Edit: Neither solution seems to work. First, I tried installing what hhlp told me. After the installation, nothing seems to change: on right-clicking the wireless icon, it seems to recognize the card, because the option "Enable wifi" was ticked. But, once again, I was not able to turn the wifi on. Second, I didn't try installing the drivers, because the card is already recognized. The issue is that I cannot seem to turn it on! One thing I've probably missed is that the Toshiba came with Windows software that lets you enable/disable the wifi; it does not have an external button to turn it off. I don't know if that's the problem, but I have the feeling the issue may be around there: how to turn ON the wifi signal (or verify whether it's on or off) in my Ubuntu.

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by adelsors
    Imagine you have a web application using an in-memory collection that changes occasionally: you load it from storage in the Application_Start global.asax event and update it whenever it changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is free, a good solution can be to maintain the information in Azure Storage tables, read its contents on the Application_Start event, and propagate changes to all instances using the internal HTTP port available on Azure web roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles:

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.
    2. Add a new WCF service to the Web Role, for example NotificationServices.svc.
    3. Add a method to the new service to receive notifications from other role instances.
    4. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the CreateServiceHost method to host the internal endpoint. Note that you can use SecurityMode.None, because the internal endpoint is private to the instances of the service; this is provided by the platform.
    5. Edit the markup of the service (right-click the .svc file and select "View markup") to add the new factory as the factory used to create the service.
    6. Now you can notify changes to other instances using this code:
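
    (The code sample for step 6 did not survive here; the sketch below is a reconstruction of what such a notification call typically looks like, reusing the endpoint and service names from the steps above. The INotificationService contract and its NotifyChange operation are assumptions standing in for the service created in steps 2-3.)

        using System;
        using System.ServiceModel;
        using Microsoft.WindowsAzure.ServiceRuntime;

        [ServiceContract]
        public interface INotificationService
        {
            [OperationContract]
            void NotifyChange(string key); // hypothetical operation from step 3
        }

        public static class ChangeNotifier
        {
            // Calls every *other* instance of this role over the internal endpoint.
            public static void NotifyAllInstances(string changedKey)
            {
                foreach (RoleInstance instance in
                         RoleEnvironment.CurrentRoleInstance.Role.Instances)
                {
                    if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id)
                        continue; // skip ourselves

                    var ip = instance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint;
                    var address = new EndpointAddress(
                        string.Format("http://{0}/NotificationServices.svc", ip));

                    // SecurityMode.None matches the host factory from step 4.
                    var binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
                    var factory = new ChannelFactory<INotificationService>(binding, address);
                    INotificationService channel = factory.CreateChannel();
                    channel.NotifyChange(changedKey);
                    ((IClientChannel)channel).Close();
                    factory.Close();
                }
            }
        }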

    Read the article

  • Password correct? then redirect [migrated]

    - by RevCity
    So I have this code, and I need it to redirect only when the correct password is entered. The only problem is that I don't know what to add that will make it redirect only if the password is "hello". I understand that using "view-source" would reveal the password; I don't mind that, it's the way I want it. I basically just need to know what to add and where to add it. So: it redirects if "hello" is typed into the password field, and does nothing if anything else is put into the password field.

        <div class="wrapper">
          <form class="form1" action="http://google.com">
            <div class="formtitle">Enter the password to proceed</div>
            <div class="input nobottomborder">
              <div class="inputtext">Password: </div>
              <div class="inputcontent">
                <input type="password" />
                <br/>
              </div>
            </div>
            <div class="buttons">
              <input class="orangebutton" type="submit" value="Login" />
            </div>
          </form>
        </div>

    I hope this was clear and could be understood.

    Read the article

  • SQL 2008/2005 Hosting :: Error - “Named Pipes Provider, error: 40 – Could not open a connection to SQL Server”

    - by mbridge
    When setting up a Microsoft Windows Server 2008 system, I went through the motions of setting up IIS, MS SQL Server 2008, and Visual Studio 2010 to use as a test bed. One of the immediate benefits of setting up such a system is that most development can be done remotely: MS SQL Server Management Studio, Visual Studio's Web development suite, as well as file shares, remote desktop, etc., make for a great way to develop remotely in 'pristine' conditions. But there are drawbacks, too, such as needing to deal with firewall issues, not being able to penetrate past a router, or the requirement of setting up a VPN. One of the problems I encountered when trying to connect remotely to the MS SQL Server 2008 I'd set up was the following error:

        Named Pipes Provider, error: 40 - Could not open a connection to SQL Server

    I followed the steps below and was able to connect to the server after just a few moments of tinkering:

    1. From the server in question, surf to this Microsoft article, and download and install the firewall rules modification program. Never drop your firewall, even on a development machine, unless you have a really good reason to.
    2. Launch SQL Server Configuration Manager. Navigate to SQL Server Network Configuration, then Protocols for your server name. Enable TCP/IP and Named Pipes by right-clicking each protocol name and choosing Enable.
    3. Restart the SQL Server service from Services (or, from the command line, run "net stop mssqlserver" then "net start mssqlserver").
    4. Try your remote connection once more, and you should be able to connect.

    It's not a terribly difficult concept, but one of the more challenging tasks developers face is dealing with environment setup. And while there is a certain blurred-line overlap between software development and server administration, sometimes the latter is daunting, especially given that you might set up only a handful of servers during your career.
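
    A quick way to confirm step 2 took effect is to force the protocol from the client side: prefixing the server name with tcp: (or np: for Named Pipes) makes the chosen protocol explicit, so a failure tells you which one is still blocked. A minimal sketch, with the server and database names as placeholders:

        using System;
        using System.Data.SqlClient;

        class ProtocolCheck
        {
            static void Main()
            {
                // "tcp:" forces TCP/IP; swap in "np:" to test Named Pipes instead.
                const string cs = "Server=tcp:myserver,1433;Database=master;" +
                                  "Integrated Security=True;";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    Console.WriteLine("Connected to " + conn.DataSource);
                }
            }
        }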

    Read the article

  • Significant amount of the time, I can't think of a reason to have an object instead of a static class. Do objects have more benefits than I think?

    - by Prog
    I understand the concept of an object, and as a Java programmer I feel the OO paradigm comes rather naturally to me in practice. However, recently I found myself thinking: wait a second, what are the practical benefits of using an object over using a static class (with proper encapsulation and OO practices)? I could think of two benefits of using an object (both significant and powerful):

    1. Polymorphism: allows you to swap functionality dynamically and flexibly during runtime. It also allows you to add new functionality 'parts' and alternatives to the system easily. For example, if there's a Car class designed to work with Engine objects, and you want to add a new Engine to the system that the Car can use, you can create a new Engine subclass and simply pass an object of this class into the Car object, without having to change anything about Car. And you can decide to do so during runtime.
    2. Being able to 'pass functionality around': you can pass an object around the system dynamically.

    But are there any more advantages to objects over static classes? Often when I add new 'parts' to a system, I do so by creating a new class and instantiating objects from it. But recently, when I stopped and thought about it, I realized that a static class would do just the same as an object in a lot of the places where I normally use an object. For example, I'm working on adding a save/load-file mechanism to my app. With an object, the calling line of code looks like this:

        Thing thing = fileLoader.load(file);

    With a static class, it looks like this:

        Thing thing = FileLoader.load(file);

    What's the difference? Fairly often I just can't think of a reason to instantiate an object when a plain old static class would act just the same. But in OO systems static classes are fairly rare, so I must be missing something. Are there any more advantages to objects other than the two I listed? Please explain.
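
    To make the Car/Engine point concrete, here is a minimal sketch (in C# syntax, with all type names invented for illustration) of why the instance version can be swapped at runtime while a static class offers no equivalent swap point:

        using System;

        interface IEngine { void Start(); }

        class PetrolEngine : IEngine { public void Start() { Console.WriteLine("vroom"); } }
        class ElectricEngine : IEngine { public void Start() { Console.WriteLine("hum"); } }

        class Car
        {
            private readonly IEngine engine;
            public Car(IEngine engine) { this.engine = engine; } // any engine will do
            public void Drive() { engine.Start(); }
        }

        class Program
        {
            static void Main()
            {
                // The same Car code runs with either engine, chosen at runtime;
                // a static FileLoader-style class cannot be substituted this way.
                var car = new Car(DateTime.Now.Ticks % 2 == 0
                                      ? (IEngine)new PetrolEngine()
                                      : new ElectricEngine());
                car.Drive();
            }
        }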

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs
    Wednesday, December 12, at 15.00 Central European Time
    Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services
    Thursday, December 13, at 9.00 am Pacific Time
    As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit for delivering cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning
    Friday, December 14, at 9.00 am Pacific Time
    We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars, so you'll get the opportunity to ask your questions!

    Read the article

  • WCF security via message headers

    - by exalted
    I'm trying to implement "some sort of" server-client & zero-config security for a WCF service. The best (as well as easiest, for me) solution I found on the web is the one described at http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx (client side) and http://www.dotnetjack.com/post/Processing-custom-WCF-header-values-at-server-side.aspx (corresponding server side). Below is my implementation of RequestAuth (described in the first link above):

        using System;
        using System.Diagnostics;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Description;
        using System.ServiceModel.Channels;

        namespace AuthLibrary
        {
            /// <summary>
            /// Ref: http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx
            /// </summary>
            public class RequestAuth : BehaviorExtensionElement, IClientMessageInspector, IEndpointBehavior
            {
                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerName = "AuthKey";

                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerNamespace = "http://some.url";

                public override Type BehaviorType
                {
                    get { return typeof(RequestAuth); }
                }

                protected override object CreateBehavior()
                {
                    return new RequestAuth();
                }

                #region IClientMessageInspector Members

                // Keeping in mind that I am SENDING something to the server,
                // I only need to implement the BeforeSendRequest method
                public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    throw new NotImplementedException();
                }

                public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
                {
                    MessageHeader<string> header = new MessageHeader<string>();
                    header.Actor = "Anyone";
                    header.Content = "TopSecretKey";

                    // Creating an untyped header to add to the WCF context
                    MessageHeader unTypedHeader = header.GetUntypedHeader(headerName, headerNamespace);
                    // Add the header to the current request
                    request.Headers.Add(unTypedHeader);
                    return null;
                }

                #endregion

                #region IEndpointBehavior Members

                public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
                {
                    throw new NotImplementedException();
                }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    clientRuntime.MessageInspectors.Add(this);
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    throw new NotImplementedException();
                }

                public void Validate(ServiceEndpoint endpoint)
                {
                    throw new NotImplementedException();
                }

                #endregion
            }
        }

    So first I put this code in my client WinForms application, but then I had problems signing it, because I had to sign all the third-party references as well, even though http://msdn.microsoft.com/en-us/library/h4fa028b(v=VS.80).aspx states, in the section "What Should Not Be Strong-Named":

        In general, you should avoid strong-naming application EXE assemblies. A strongly named application or component cannot reference a weak-named component, so strong-naming an EXE prevents the EXE from referencing weak-named DLLs that are deployed with the application. For this reason, the Visual Studio project system does not strong-name application EXEs. Instead, it strong-names the Application manifest, which internally points to the weak-named application EXE.

    I expected VS to avoid this problem, but I had no luck there; it complained about all the unsigned references. So I created a separate "WCF Service Library" project inside my solution containing only the code above and signed that one. At that point the entire solution compiled just fine. And here's my problem: when I fired up the "WCF Service Configuration Editor" I was able to add a new behavior element extension (say "AuthExtension"), but when I then tried to add that extension to my endpoint behavior, it gave me:

        Exception has been thrown by the target of an invocation.

    So I'm stuck here. Any ideas?

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find much guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it and see if anyone has better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.:

        var families = from family in entities.Family.Include("Person")
                       orderby family.PrimaryLastName, family.Tag
                       select family;

    Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data; e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes:

        familyOC = new ObservableCollection<Family>(families.ToList());

    I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database:

        familyCVS.Source = familyOC;
        familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter);
        familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending));
        familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending));

    I then bind the various controls and what-not to that CollectionViewSource:

        <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList"
                 ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}"
                 IsSynchronizedWithCurrentItem="True"
                 ItemTemplate="{StaticResource familyTemplate}"
                 SelectionChanged="familyList_SelectionChanged" />

    When I need to add or delete records/objects, I manually do so from both the entity data model and the ObservableCollection:

        private void DeletePerson(Person person)
        {
            entities.DeleteObject(person);
            entities.SaveChanges();
            personOC.Remove(person);
        }

    I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful for seeing what your form is going to look like, but even then, well, not really, especially if you're using data templates that bind to data that isn't available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML.
    As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's good practice to almost never use code-behind in a WPF page (say, for event handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?

    Read the article

  • C# PowerShell output reader iterator getting modified when pipeline closed and disposed

    - by scope-creep
    Hello, I'm calling a PowerShell script from C#. The script is pretty small and is "gps;$host.SetShouldExit(9)", which lists processes and then sends back an exit code to be captured by the PSHost object. The problem I have is that when the pipeline has been stopped and disposed, the output reader PSHost collection still seems to be written to, and keeps filling up. So when I try to copy it to my own output object, it craps out with an OutOfMemoryException when I try to iterate over it. Sometimes it excepts with a "Collection was modified" message. Here is the code:

        private void ProcessAndExecuteBlock(ScriptBlock Block)
        {
            Collection<PSObject> PSCollection = new Collection<PSObject>();
            Collection<Object> PSErrorCollection = new Collection<Object>();
            Boolean Error = false;
            int ExitCode = 0;

            // Send for execution.
            ExecuteScript(Block.Script);

            // Process the wait handles.
            while (PExecutor.PLine.PipelineStateInfo.State == PipelineState.Running)
            {
                // Wait for either the error or the data wait handle.
                switch (WaitHandle.WaitAny(PExecutor.Hand))
                {
                    // Data
                    case 0:
                        Collection<PSObject> data = PExecutor.PLine.Output.NonBlockingRead();
                        if (data.Count > 0)
                        {
                            for (int cnt = 0; cnt <= (data.Count - 1); cnt++)
                            {
                                PSCollection.Add(data[cnt]);
                            }
                        }
                        // Check to see if the pipeline has been closed.
                        if (PExecutor.PLine.Output.EndOfPipeline)
                        {
                            // Bring back the exit code.
                            ExitCode = RHost.ExitCode;
                        }
                        break;
                    case 1:
                        Collection<object> Errordata = PExecutor.PLine.Error.NonBlockingRead();
                        if (Errordata.Count > 0)
                        {
                            Error = true;
                            for (int count = 0; count <= (Errordata.Count - 1); count++)
                            {
                                PSErrorCollection.Add(Errordata[count]);
                            }
                        }
                        break;
                }
            }

            PExecutor.Stop();

            // Create the execution return block.
            ExecutionResults ER = new ExecutionResults(Block.RuleGuid, Block.SubRuleGuid, Block.MessageIdentfier);
            ER.ExitCode = ExitCode;

            // Add in the data results.
            lock (ReadSync)
            {
                if (PSCollection.Count > 0)
                {
                    ER.DataAdd(PSCollection);
                }
            }

            // Add in the error data, if any.
            if (Error)
            {
                if (PSErrorCollection.Count > 0)
                {
                    ER.ErrorAdd(PSErrorCollection);
                }
                else
                {
                    ER.InError = true;
                }
            }

            // We have finished, so enqueue the block back.
            EnQueueOutput(ER);
        }

    and this is the PipelineExecutor class, which sets up the pipeline for execution:

        public class PipelineExecutor
        {
            private Pipeline pipeline;
            private WaitHandle[] Handles;

            public Pipeline PLine { get { return pipeline; } }
            public WaitHandle[] Hand { get { return Handles; } }

            public PipelineExecutor(Runspace runSpace, string command)
            {
                pipeline = runSpace.CreatePipeline(command);
                Handles = new WaitHandle[2];
                Handles[0] = pipeline.Output.WaitHandle;
                Handles[1] = pipeline.Error.WaitHandle;
            }

            public void Start()
            {
                if (pipeline.PipelineStateInfo.State == PipelineState.NotStarted)
                {
                    pipeline.Input.Close();
                    pipeline.InvokeAsync();
                }
            }

            public void Stop()
            {
                pipeline.StopAsync();
            }
        }

    And this is the DataAdd method, where the exception arises:

        public void DataAdd(Collection<PSObject> Data)
        {
            // Note: this enumerates Data while adding to the same collection:
            // a foreach throws "Collection was modified", and a for loop
            // never runs out of items.
            foreach (PSObject Ps in Data)
            {
                Data.Add(Ps);
            }
        }

    I put a for loop around the Data.Add, and the collection filled up with 600k+ items, so it feels like the gps command is still running. But why? Any ideas? Thanks in advance.

    Read the article

  • Rails HABTM accepts_nested_attributes_for mapping table

    - by Rabbott
    Currently I have a HABTM relationship between a datastream (Stream) and a Chart. I want to be able to use the 'typical' jQuery "add new stream" method to add a new entry into a mapping table that holds the chart_id and stream_id. But I think most examples out there are made to handle a one-to-many relationship. The functionality provided here by Ryan Bates is what I'm looking for, but I don't want to use checkboxes; I want to use a select (drop down) to add new items to the mapping table. I need THIS and THIS combined.

    chart.rb:

        class Chart < ActiveRecord::Base
          has_and_belongs_to_many :streams, :readonly => false,
            :join_table => 'charts_streams'
          accepts_nested_attributes_for :streams,
            :reject_if => lambda { |a| a.values.all?(&:blank?) },
            :allow_destroy => true
        end

    stream.rb:

        class Stream < ActiveRecord::Base
          has_and_belongs_to_many :charts, :readonly => false,
            :join_table => 'charts_streams'
        end

    application.js:

        /*
         * Method used to add new child form partials
         */
        $('form a.add_child').click(function() {
          var assoc = $(this).attr('data-association');
          var content = $('#' + assoc + '_fields_template').html();
          var regexp = new RegExp('new_' + assoc, 'g');
          var new_id = new Date().getTime();
          var newElements = jQuery(content.replace(regexp, new_id)).hide();
          $(this).parent().before(newElements).prev().slideFadeToggle();
          return false;
        });

        /*
         * Method used to remove child form partials
         */
        $('form a.remove_child').live('click', function() {
          if (confirm('Are you sure?')) {
            var hidden_field = $(this).prev('input[type=hidden]')[0];
            if (hidden_field) {
              hidden_field.value = '1';
            }
            $(this).parents('.fields').slideFadeToggle();
          }
          return false;
        });

    _form.html.erb:

        <div id="streams">
          <h4>Streams</h4>
          <% form.fields_for :streams do |stream_form| %>
            <%= render :partial => "stream", :locals => {:f => stream_form} %>
          <% end %>
        </div>
        <p><%= add_child_link 'Add a stream', :streams %></p>
        <%= new_child_fields_template(form, :streams) %>

    _stream.html.erb:

        <div class="fields">
          <p>
            <%= f.label :stream_id %><br />
            <%= select_tag "chart[stream_ids][]", options_for_select(@streams.map {|s| [s.label, s.id]}, f.object.id) %>
          </p>
          <p><%= remove_child_link 'Remove stream', f %></p>
        </div>

    This may be overkill, and what I am looking to do could be much easier. Basically, when I create a chart I want to be able to add as many streams to it as I want, using the javascript for adding a new entry and removing an old one... Thanks for the help! This is 'working' but gives me an ActiveRecord::ReadOnlyRecord error when it hits the chart.update_attributes method. Am I adding the :readonly => false in the wrong spot? Or am I just doing it completely wrong?

    Read the article

  • backtracking in haskell

    - by dmindreader
    I have to traverse a matrix and say how many "characteristic areas" of each type it has. A characteristic area is defined as a zone where elements of value n are adjacent. For example, given the matrix:

        0 1 2 2
        0 1 1 2
        0 3 0 0

    There's a single characteristic area of type 1, which is equal to the original matrix:

        0 1 2 2
        0 1 1 2
        0 3 0 0

    There are two characteristic areas of type 2:

        0 0 2 2
        0 0 0 0
        0 0 0 2

        0 0 0 0
        0 0 0 0
        0 3 0 0

    And one characteristic area of type 3:

        0 0 0 0
        0 0 0 0
        0 3 0 0

    So, for the function call:

        countAreas [[0,1,2,2],[0,1,1,2],[0,3,0,0]]

    the result should be [1,2,1]. I haven't defined countAreas yet; I'm stuck with my visit function: when there are no more possible squares to move to, it gets stuck and doesn't make the proper recursive call. I'm new to functional programming, and I'm still scratching my head about how to implement a backtracking algorithm here. Take a look at my code; what can I do to change it?

        move_right :: (Int,Int) -> [[Int]] -> Int -> Bool
        move_right (i,j) mat cond
          | (j + 1) < number_of_columns mat && consult (i,j+1) mat /= cond = True
          | otherwise = False

        move_left :: (Int,Int) -> [[Int]] -> Int -> Bool
        move_left (i,j) mat cond
          | (j - 1) >= 0 && consult (i,j-1) mat /= cond = True
          | otherwise = False

        move_up :: (Int,Int) -> [[Int]] -> Int -> Bool
        move_up (i,j) mat cond
          | (i - 1) >= 0 && consult (i-1,j) mat /= cond = True
          | otherwise = False

        move_down :: (Int,Int) -> [[Int]] -> Int -> Bool
        move_down (i,j) mat cond
          | (i + 1) < number_of_rows mat && consult (i+1,j) mat /= cond = True
          | otherwise = False

        imp :: (Int,Int) -> Int
        imp (i,j) = i

        number_of_rows :: [[Int]] -> Int
        number_of_rows i = length i

        number_of_columns :: [[Int]] -> Int
        number_of_columns (x:xs) = length x

        consult :: (Int,Int) -> [[Int]] -> Int
        consult (i,j) l = (l !! i) !! j

        visited :: (Int,Int) -> [(Int,Int)] -> Bool
        visited x y = elem x y

        add :: (Int,Int) -> [(Int,Int)] -> [(Int,Int)]
        add x y = x:y

        visit :: (Int,Int) -> [(Int,Int)] -> [[Int]] -> Int -> [(Int,Int)]
        visit (i,j) vis mat cond
          | move_right (i,j) mat cond && not (visited (i,j+1) vis) = visit (i,j+1) (add (i,j+1) vis) mat cond
          | move_down (i,j) mat cond && not (visited (i+1,j) vis) = visit (i+1,j) (add (i+1,j) vis) mat cond
          | move_left (i,j) mat cond && not (visited (i,j-1) vis) = visit (i,j-1) (add (i,j-1) vis) mat cond
          | move_up (i,j) mat cond && not (visited (i-1,j) vis) = visit (i-1,j) (add (i-1,j) vis) mat cond
          | otherwise = vis

    Read the article

  • Using DateTime in a SqlParameter for Stored Procedure, format error

    - by Matt
    I'm trying to call a stored procedure (on a SQL 2005 server) from C#, .NET 2.0, using a DateTime value in a SqlParameter. The SQL type in the stored procedure is 'datetime'. Executing the sproc from SQL Management Studio works fine, but every time I call it from C# I get an error about the date format. When I run SQL Profiler to watch the calls, I copy-paste the exec call to see what's going on. These are my observations and notes about what I've attempted:

    1. If I pass the DateTime in directly as a DateTime, or converted to SqlDateTime, the field is surrounded by a PAIR of single quotes, such as @Date_Of_Birth=N''1/8/2009 8:06:17 PM''
    2. If I pass the DateTime in as a string, I only get the single quotes.
    3. Using SqlDateTime.ToSqlString() does not result in a UTC-formatted datetime string (even after converting to universal time).
    4. Using DateTime.ToString() does not result in a UTC-formatted datetime string.
    5. Manually setting the DbType for the SqlParameter to DateTime does not change the above observations.

    So, my question is: how on earth do I get C# to pass a properly formatted time in the SqlParameter? Surely this is a common use case; why is it so difficult to get working? I can't seem to convert DateTime to a string that is SQL compatible (e.g. '2009-01-08T08:22:45').

    EDIT RE: BFree, the code to actually execute the sproc is as follows:

        using (SqlCommand sprocCommand = new SqlCommand(sprocName))
        {
            sprocCommand.Connection = transaction.Connection;
            sprocCommand.Transaction = transaction;
            sprocCommand.CommandType = System.Data.CommandType.StoredProcedure;
            sprocCommand.Parameters.AddRange(parameters.ToArray());
            sprocCommand.ExecuteNonQuery();
        }

    To go into more detail about what I have tried:

        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB));
        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime()));
        parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime().ToString()));

        SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime);
        param.Value = DOB.ToUniversalTime();
        parameters.Add(param);

        SqlParameter param = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime);
        param.Value = new SqlDateTime(DOB.ToUniversalTime());
        parameters.Add(param);

        parameters.Add(new SqlParameter("@Date_Of_Birth", new SqlDateTime(DOB.ToUniversalTime()).ToSqlString()));

    Additional EDIT

    The one I thought most likely to work:

        SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime);
        param.Value = DOB;

    results in this value in the exec call, as seen in SQL Profiler:

        @Date_Of_Birth=''2009-01-08 15:08:21:813''

    If I modify this to

        @Date_Of_Birth='2009-01-08T15:08:21'

    it works, but it won't parse with the pair of single quotes, and it won't convert to a datetime correctly with the space between the date and time and with the milliseconds on the end.

    Update and Success

    First and foremost, thank you everyone for the answers. I post this for the sake of completeness and accuracy on SO, and certainly not for my pride... I had copy/pasted the code above after the request from below, trimming things here and there to be concise. Turns out my problem was in the code I left out, which I'm sure any one of you would have spotted in an instant. I had wrapped my sproc calls inside a transaction, and it turns out I was simply not calling transaction.Commit()!!!!! I'm ashamed to say it, but there you have it. I still don't know what's going on with the syntax I get back from the profiler.
    A coworker watched the same executions from his own instance of the profiler, and it returned proper syntax; watching the very SAME executions from my profiler showed the incorrect syntax. It acted as a red herring, making me believe there was a query syntax problem instead of the much simpler and true answer, which was that I needed to commit the transaction! I marked an answer below as correct and threw in some up-votes on others, because they did, after all, answer the question, even if they didn't fix my specific (brain lapse) issue. Thanks again for the help.
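
    For anyone landing here with the same symptom, the shape of the fix is just the standard transaction pattern; a minimal sketch, where the connection string and sproc name are placeholders:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class CommitDemo
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=mydb;Integrated Security=True;"))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        using (var cmd = new SqlCommand("dbo.InsertPerson", conn, tx))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            // SqlDbType.DateTime parameters need no string formatting at all.
                            cmd.Parameters.Add("@Date_Of_Birth", SqlDbType.DateTime).Value = DateTime.Now;
                            cmd.ExecuteNonQuery();
                        }
                        tx.Commit(); // without this, the work silently rolls back on dispose
                    }
                }
            }
        }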

    Read the article
