Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm, as close to real-time as possible. The objectives of this question are to:

    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI, such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    - Monitor whether any given box is up (with "up" defined as responding to web requests in a reasonable amount of time).

    Here are more details and things I've tried. I am not interested in logging; we have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis.

    We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us in real time. Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring.

    The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.

    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal; I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something, and if a client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
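    For illustration, a minimal sketch of the kind of WMI polling the proposed client could expose (C#, assuming a reference to System.Management; the HTTP endpoint, scheduling and error handling are left out, and the WMI class/property names should be verified against the target servers):

        using System;
        using System.Management;

        class WmiSampler
        {
            // Collects a couple of WMI counters from the local box; in the
            // client-server idea above, each web server would run this and
            // hand the numbers back over HTTP to the central collector.
            public static void Sample()
            {
                // CPU load per processor, 0-100
                var cpu = new ManagementObjectSearcher(
                    "SELECT LoadPercentage FROM Win32_Processor");
                foreach (ManagementObject mo in cpu.Get())
                    Console.WriteLine("CPU: {0}%", mo["LoadPercentage"]);

                // Free physical memory in KB
                var mem = new ManagementObjectSearcher(
                    "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
                foreach (ManagementObject mo in mem.Get())
                    Console.WriteLine("Free RAM: {0} KB", mo["FreePhysicalMemory"]);
            }
        }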

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed a synopsis back to the dev mailing list with a link to this question. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision. The first thing in his "vision statement" is: "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent." There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community. Lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel you can contribute, please do so.

    Read the article

  • Another design-related C++ question

    - by Kotti
    Hi! I am trying to find some optimal solutions in C++ coding patterns, and this is one of my game engine - related questions. Take a look at the game object declaration (I removed almost everything, that has no connection with the question). // Abstract representation of a game object class Object : public Entity, IRenderable, ISerializable { // Object parameters // Other not really important stuff public: // @note Rendering template will never change while // the object 'lives' Object(RenderTemplate& render_template, /* params */) : /*...*/ { } private: // Object rendering template RenderTemplate render_template; public: /** * Default object render method * Draws rendering template data at (X, Y) with (Width, Height) dimensions * * @note If no appropriate rendering method overload is specified * for any derived class, this method is called * * @param Backend & b * @return void * @see */ virtual void Render(Backend& backend) const { // Render sprite from object's // rendering template structure backend.RenderFromTemplate( render_template, x, y, width, height ); } }; Here is also the IRenderable interface declaration: // Objects that can be rendered interface IRenderable { /** * Abstract method to render current object * * @param Backend & b * @return void * @see */ virtual void Render(Backend& b) const = 0; } and a sample of a real object that is derived from Object (with severe simplifications :) // Ball object class Ball : public Object { // Ball params public: virtual void Render(Backend& b) const { b.RenderEllipse(/*params*/); } }; What I wanted to get is the ability to have some sort of standard function, that would draw sprite for an object (this is Object::Render) if there is no appropriate overload. So, one can have objects without Render(...) method, and if you try to render them, this default sprite-rendering stuff is invoked. And, one can have specialized objects, that define their own way of being rendered. I think, this way of doing things is quite good, but what I can't figure out - is there any way to split the objects' "normal" methods (like Resize(...) or Rotate(...)) implementation from their rendering implementation? Because if everything is done the way described earlier, a common .cpp file, that implements any type of object would generally mix the Resize(...), etc methods implementation and this virtual Render(...) method and this seems to be a mess. I actually want to have rendering procedures for the objects in one place and their "logic implementation" - in another. Is there a way this can be done (maybe alternative pattern or trick or hint) or this is where all this polymorphic and virtual stuff sucks in terms of code placement?

    Read the article

  • Switching from MDI to SDI and Back Again

    - by Mike Hofer
    This sounds like it would be a simple task, but I'm running into some issues. I have some fairly straightforward code for my C# application: private void SwitchToSdi() { MainWindow mainWindow = GetMainWindow(); for (int index = mainWindow.MdiChildren.Length - 1; index >= 0; index--) { Form form = mainWindow.MdiChildren[index]; if (!(form is MainWindow)) { form.Visible = false; form.MdiParent = null; form.Visible = true; mainWindow.MdiChildren[index] = null; } } mainWindow.IsMdiContainer = false; } And then, private void SwitchToMdi() { MainWindow mainWindow = GetMainWindow(); mainWindow.IsMdiContainer = true; for (int index = Application.OpenForms.Count - 1; index >= 0; index--) { Form form = Application.OpenForms[index]; if (!(form is MainWindow)) { form.Visible = false; form.MdiParent = mainWindow; form.Visible = true; } } } Note that MainWindow is an MDI parent window, with its IsMdiContainer property set to True. The user can toggle between MDI and SDI in the Options dialog. That much works beautifully. If I switch to SDI, the new windows open up OUTSIDE the main window, which is great. Similarly, if I switch to MDI, they open up inside the container. However, I've noticed a few things. Open MDI child windows don't become SDI windows as I would have expected them to. Open SDI windows don't become MDI child windows as I would have expected them to. Even after I set IsMdiContainer to true in the call to SwitchToMdi(), if I try to open a new window, I get an exception that tells me that the main window isn't an MDI container. o_O Someone please throw me a bone here. This shouldn't be rocket science. But I'm not finding a whole lot of useful help out there on the Intarwebs (read: g00gle is fairly useless). Has anyone actually implemented this behavior in .NET before? How did you achieve it? What am I missing here? Halp!
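    One possible restructuring to experiment with - not a known fix, just a sketch: snapshot the open forms into a list before touching anything, flip IsMdiContainer before any form is re-parented in either direction, and drop the write into MdiChildren (the property hands back an array, so assigning null into it is unlikely to have any effect). Requires using System.Linq and System.Windows.Forms; MainWindow/GetMainWindow are the names from the question.

        private void ToggleMdi(bool useMdi)
        {
            MainWindow mainWindow = GetMainWindow();

            // Snapshot first, so re-parenting can't disturb the collection
            // we're iterating over.
            var children = Application.OpenForms
                .Cast<Form>()
                .Where(f => !(f is MainWindow))
                .ToList();

            // The container flag must be on before any child gets an MdiParent.
            if (useMdi)
                mainWindow.IsMdiContainer = true;

            foreach (Form form in children)
            {
                form.Visible = false;
                form.MdiParent = useMdi ? mainWindow : null;
                form.Visible = true;
            }

            if (!useMdi)
                mainWindow.IsMdiContainer = false;
        }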

    Read the article

  • Adding rows to a data-bound DataGridView [Winforms]

    - by Mishko
    I want to bind a table from a database to a DataGridView, but I want to also add one more row with a sum of the values in the columns with indexes 3,4,7,8,9... How can I do that? Thanks! DataTable table1 = new DataTable(); double brutoUkupno1 = 0; double porezUkupno1 = 0; double doprinosUkupno1 = 0; double netoUkupno1 = 0; double doprinosTeretUkupno1 = 0; double topliObrokUkupno1 = 0; double regresUkupno1 = 0; Connection con = new Connection(); table1 = con.boundTable(month, Convert.ToInt32(year)); //This is method which returns DataTable table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); dgv2.Visible = true; dgv2.DataSource = table1; for (int i = 0; i < dgv2.RowCount - 2; i++) { topliObrokUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[7].Value); regresUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[8].Value); brutoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[9].Value); porezUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[10].Value); doprinosUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[11].Value); netoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[12].Value); doprinosTeretUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[13].Value); //Now I am having problems with this below, putting things above to dgv2 : } dgv2.Rows[dgv2.Rows.Count - 1].Cells[0].Value = "Ukupno"; dgv2.Rows[dgv2.Rows.Count - 1].Cells[3].Value = month.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[4].Value = year.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[7].Value = topliObrokUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[8].Value = regresUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[9].Value = brutoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[10].Value = porezUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[11].Value = doprinosUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[12].Value = netoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[13].Value = doprinosTeretUkupno1.ToString(); dgv2.Rows[dgv2.RowCount - 2].Height = 3; dgv2.Rows[dgv2.RowCount - 2].DefaultCellStyle.BackColor = Color.Black;
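    One alternative worth considering, sketched below: compute the totals on the DataTable itself before it is bound, using DataTable.Compute, and append a single "Ukupno" row, rather than reading values back out of the grid afterwards. The column indexes are taken from the question but the column types are an assumption - if the numeric columns come back as strings, or columns 0/3/4 are not string-typed, the assignments would need converting to match the real schema.

        // table1 comes from the same call as in the question
        DataTable table1 = con.boundTable(month, Convert.ToInt32(year));

        DataRow total = table1.NewRow();
        total[0] = "Ukupno";
        total[3] = month.ToString();
        total[4] = year.ToString();

        // Sum each numeric column of interest (7..13 in the question)
        foreach (int col in new[] { 7, 8, 9, 10, 11, 12, 13 })
        {
            object sum = table1.Compute(
                "Sum([" + table1.Columns[col].ColumnName + "])", string.Empty);
            total[col] = sum == DBNull.Value ? 0.0 : Convert.ToDouble(sum);
        }

        table1.Rows.Add(total);

        dgv2.Visible = true;
        dgv2.DataSource = table1;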

    Read the article

  • Why does filter: blur(0) still cause text to blur under Webkit?

    - by johnkavanagh
    I've come across a bug today that's taken far longer than I would like to admit to identify. Essentially: setting filter: blur(0) (or the vendor-specific -webkit-filter) on an element should - I believe - mean that no blur is applied. However, having tested this today, it would appear that WebKit-based browsers still blur the text within any element with either blur(0) or blur(0px) assigned to it.

    I've knocked together a quick fiddle here: http://jsfiddle.net/f9rBE/

    These are three identical divs containing text (no custom fonts):

    This has absolutely nothing assigned
    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam facilisis orci in quam venenatis, in tempus ipsum sagittis. Suspendisse potenti. Donec ullamcorper lacus vel odio accumsan, vel aliquam libero tempor. Praesent nec libero venenatis, ultrices arcu non, luctus quam. Morbi scelerisque sit amet turpis sit amet tincidunt. Praesent semper erat non purus pretium consequat. Aenean et iaculis turpis. Curabitur diam tellus, consectetur non massa et, commodo venenatis metus.

    One has no styles at all assigned, the other two have blur(0) and blur(0px):

        .no-blur{}

        .zero-px-blur{
            -webkit-filter: blur(0px);
            -moz-filter: blur(0px);
            -o-filter: blur(0px);
            -ms-filter: blur(0px);
            filter: blur(0px);
        }

        .zero-blur{
            -webkit-filter: blur(0);
            -moz-filter: blur(0);
            -o-filter: blur(0);
            -ms-filter: blur(0);
            filter: blur(0);
        }

    If you preview this under Chrome/Safari you'll see that the text in the second two is still blurred. A few things worth noting:

    - This unintentional blurring occurs in Safari on iOS 7 devices (both iPhones and iPads);
    - It also occurs in Chrome and Safari under OS X;
    - It doesn't happen under Firefox on OS X. Of course, this isn't supported at all in Firefox just yet, so it's hard to tell whether the behaviour I'm seeing is intentional/expected, or whether this is a bug in WebKit.

    Is it possible that this is only prevalent on higher-density displays (i.e. retina MacBook/iPhone/iPad)? With this in mind, how do you actually override an element that has blur applied to it to set it back to non-blurred?

    Read the article

  • How can we implement change notification propagation for WPF and SL in the MVVM pattern?

    - by Firoso
    Here's an example scenario targeting MVVM WPF/SL development: the view data-binds to the view model property "Target". "Target" exposes a field of an object called "data" that exists in the local application model, called "Original". When "Original" changes, it should raise a notification to the view model and then propagate that change notification to the view.

    Here are the solutions I've come up with, but I don't like any of them all that much. I'm looking for other ideas; by the time we come up with something rock solid I'm certain Microsoft will have released .NET 5 with WPF/SL extensions and better tools for MVVM development. For now the question is, "What have you done to solve this problem and how has it worked out for you?"

    Option 1. Proposal: attach a handler to data's PropertyChanged event that watches for the string names of properties it cares about, and raises the appropriate notification. Why I don't like it: changes don't bubble naturally, objects must be explicitly watched, and if data changes to a new source, events must be un-registered/re-registered. Why I kind of like it: I get explicit control over the propagation of changes, and I don't have to use any types that belong at a higher level of the application, such as dependency properties.

    Option 2. Proposal: attach a handler to data's PropertyChanged event that re-raises the event across all properties using the same property name. Why I don't like it: this is essentially the same as option 1, but less intelligent, and it forces me never to change my property names, as they have to be the same as the property names on data. Why I kind of like it: it's very easy to set up and I don't have to think about it... then again, if I try to think and change names to things that make sense, I shoot myself in the foot, and then I have to think about it!

    Option 3. Proposal: inherit my view model from DependencyObject and notify binding sources of changes directly. Why I don't like it: I'm not even 100% sure dependency properties/objects can DO this; it was just a thought to look into. Also, I don't personally feel that WPF/SL types like DependencyObject belong at the view model level. Why I kind of like it: IF it has the capability I'm seeking, then it's a good answer - minus that pesky layering issue.

    Option 4. Proposal: use a consistent agent messaging system based on the Task Parallel Library's Dataflow components to propagate everything through linked pipelining. Why I don't like it: I've never tried this, and somehow I think it will be lacking; plus it requires me to think about my code completely differently all the way around. Why I kind of like it: it has the possibility of allowing me to do some VERY fun manipulations with a push-based data model, using ActionBlocks as validation AND setters to then privately change view model properties and explicitly control PropertyChanged notifications.
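    For reference, a minimal sketch of what Option 1 looks like in code - the view model subscribes to the model's PropertyChanged and re-raises its own notification under the view model's property name. Class and property names here are illustrative, not taken from the question's code.

        using System.ComponentModel;

        public class OriginalModel : INotifyPropertyChanged
        {
            private string data;
            public string Data
            {
                get { return data; }
                set { data = value; Raise("Data"); }
            }
            public event PropertyChangedEventHandler PropertyChanged;
            private void Raise(string name)
            {
                var h = PropertyChanged;
                if (h != null) h(this, new PropertyChangedEventArgs(name));
            }
        }

        public class TargetViewModel : INotifyPropertyChanged
        {
            private readonly OriginalModel original;
            public TargetViewModel(OriginalModel original)
            {
                this.original = original;
                original.PropertyChanged += (s, e) =>
                {
                    // Map model property names to view model property names here
                    if (e.PropertyName == "Data") Raise("Target");
                };
            }
            public string Target { get { return original.Data; } }
            public event PropertyChangedEventHandler PropertyChanged;
            private void Raise(string name)
            {
                var h = PropertyChanged;
                if (h != null) h(this, new PropertyChangedEventArgs(name));
            }
        }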

    Read the article

  • How to pass a parameter to a web service using ksoap2?

    - by user255681
    hi there, i'm using eclipse to develop over android, i'm trying to connect to a .net webservice... when i'm calling a webmethod with no parameters it works fine... but when i come to pass a parameter to the webmethod things turn upside down... the parameter is passed as null (while debugging the webservice i discovered that) and i get a null from the webmethod in the client side code... i've been searching for a solution for a day now and all that i can interpreter is that people keep talking about encoding styles and such stuff.... i've tried it all but in vain. i'm using ksoap2 version 2.3 with the following code package com.examples.hello; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.PropertyInfo; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.HttpTransportSE; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class HelloActivity extends Activity { /** Called when the activity is first created. */ private static final String SOAP_ACTION = "http://Innovation/HRService/stringBs"; private static final String METHOD_NAME = "stringBs"; private static final String NAMESPACE = "http://Innovation/HRService/"; private static final String URL = "http://196.205.5.170/mdl/hrservice.asmx"; TextView tv; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); tv=(TextView)findViewById(R.id.text1); call(); } public void call() { try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); //PropertyInfo PI = new PropertyInfo(); //request.addProperty("a", "myprop"); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.setOutputSoapObject(request); envelope.dotNet=true; envelope.encodingStyle = SoapSerializationEnvelope.XSD; HttpTransportSE androidHttpTransport = new HttpTransportSE(URL); androidHttpTransport.call(SOAP_ACTION, envelope); Object result = (Object)envelope.getResponse(); String results = result.toString(); tv.setText( ""+results); } catch (Exception e) { tv.setText(e.getMessage()); } } }

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the gmail smtp servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day. Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email. I don't think using gmail's smtp servers will cut it anymore, but I don't know that for sure. I don't know what gmail's maximum outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients, but I'm going to be sending 1000's of customized messages with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns on it: Should I set up my own outgoing mail server? If I do this, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt-out of emails, etc. Are there any tools or services to help with this? Maybe something like OpenEMM or a services like MailChimp? But those seem focused more toward email marketing campaigns. I don't think I should have the webapp itself handle sending emails as it currently is for new user signups. I'm thinking I should setup a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this? Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.? Do I need to provide users a way to opt-out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette. Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!

    Read the article

  • Asynchronous readback from opengl front buffer using multiple PBO's

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an openGL application. I can hijack the application's opengl library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciating slow glReadPixels command without PBO's. Now I read about using multiple PBO's to speed things up. While I think I've found enough resources to actually program that (isn't that hard), I have some operational questions left. I would do something like this: create a series (e.g. 3) of PBO's use glReadPixels in my swapBuffers override to read data from front buffer to a PBO (should be fast and non-blocking, right?) Create a seperate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory. Process the data from step 3. Now my main concern is of course in steps 2 and 3. I read about glReadPixels used on PBO's being non-blocking, will this be an issue if I issue new opengl commands after that very fast? Will those opengl commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem, will this one stall or will glReadPixels from front buffer be many times faster than swapping (about each 15-30ms) or, worst case scenario, will swapbuffers be executed while glReadPixels is still reading data to the PBO? My current guess is this logic will do something like this: copy FRONT_BUFFER - generic place in VRAM, copy VRAM-RAM. But I have no idea which of those 2 is the real bottleneck and more, what the influence on the normal opengl command stream is. Then in step 3. Is it wise to do this asynchronously in a thread separated from normal opengl logic? At the moment I think not, It seems you have to restore buffer operations to normal after doing this and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading them out, so e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use data in client memory, will this be the bottleneck or will glReadPixels (asynchronously) be the bottleneck? And finally, if you have some better ideas to speed up frame readback from GPU in opengl, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough, I know the answer will probably also be somewhere on the internet but I mostly came up with results that used PBO's to keep buffers in video memory and do processing there. I really need to read back the front buffer to RAM and I do not find any clear explanations about performance in that case (which I need, I cannot rely on "it's faster", I need to explain why it's faster). Thank you

    Read the article

  • Asynchronous sockets in C#

    - by IVlad
    I'm confused about the correct way of using asynchronous socket methods in C#. I will refer to these two articles to explain things and ask my questions: MSDN article on asynchronous client sockets and devarticles.com article on socket programming. My question is about the BeginReceive() method. The MSDN article uses these two functions to handle receiving data: private static void Receive(Socket client) { try { // Create the state object. StateObject state = new StateObject(); state.workSocket = client; // Begin receiving the data from the remote device. client.BeginReceive( state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(ReceiveCallback), state); } catch (Exception e) { Console.WriteLine(e.ToString()); } } private static void ReceiveCallback( IAsyncResult ar ) { try { // Retrieve the state object and the client socket // from the asynchronous state object. StateObject state = (StateObject) ar.AsyncState; Socket client = state.workSocket; // Read data from the remote device. int bytesRead = client.EndReceive(ar); if (bytesRead > 0) { // There might be more data, so store the data received so far. state.sb.Append(Encoding.ASCII.GetString(state.buffer,0,bytesRead)); // Get the rest of the data. client.BeginReceive(state.buffer,0,StateObject.BufferSize,0, new AsyncCallback(ReceiveCallback), state); } else { // All the data has arrived; put it in response. if (state.sb.Length > 1) { response = state.sb.ToString(); } // Signal that all bytes have been received. receiveDone.Set(); } } catch (Exception e) { Console.WriteLine(e.ToString()); } } While the devarticles.com tutorial passes null for the last parameter of the BeginReceive method, and goes on to explain that the last parameter is useful when we're dealing with multiple sockets. Now my questions are: What is the point of passing a state to the BeginReceive method if we're only working with a single socket? Is it to avoid using a class field? It seems like there's little point in doing it, but maybe I'm missing something. How can the state parameter help when dealing with multiple sockets? If I'm calling client.BeginReceive(...), won't all the data be read from the client socket? The devarticles.com tutorial makes it sound like in this call: m_asynResult = m_socClient.BeginReceive (theSocPkt.dataBuffer,0,theSocPkt.dataBuffer.Length, SocketFlags.None,pfnCallBack,theSocPkt); Data will be read from the theSocPkt.thisSocket socket, instead of from the m_socClient socket. In their example the two are one and the same, but what happens if that is not the case? I just don't really see where that last argument is useful or at least how it helps with multiple sockets. If I have multiple sockets, I still need to call BeginReceive on each of them, right?
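    On questions 2 and 3, a small sketch may help: the state argument is simply handed back to the callback via ar.AsyncState, which is how one shared callback can serve many connections - each BeginReceive call carries its own buffer and its own socket. This assumes one connected Socket per client; the class and field names are illustrative.

        using System;
        using System.Net.Sockets;
        using System.Text;

        class ReceiveState
        {
            public Socket Socket;                  // which connection this buffer belongs to
            public byte[] Buffer = new byte[4096];
            public StringBuilder Data = new StringBuilder();
        }

        static class Receiver
        {
            public static void Start(Socket socket)
            {
                var state = new ReceiveState { Socket = socket };
                socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                    SocketFlags.None, OnReceive, state);
            }

            private static void OnReceive(IAsyncResult ar)
            {
                var state = (ReceiveState)ar.AsyncState;  // recover "which socket"
                int read = state.Socket.EndReceive(ar);
                if (read > 0)
                {
                    state.Data.Append(Encoding.ASCII.GetString(state.Buffer, 0, read));
                    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                              SocketFlags.None, OnReceive, state);
                }
                // else: the remote side closed; state.Data holds everything received
            }
        }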

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services; each has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but it's worth mentioning).

    The weird thing is, when you click "Add Reference", browse to Connectivity/bin/debug and hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where Copy Local would put the DLL after compiling), it shows the latest version number. Nowhere can I find the previous version of the DLL outside of Visual Studio, yet in that project's references it has the old version - even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)?

    EDIT: Although not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions the CLR ignoring revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I thought I would update the build number; that still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it, this seems to have solved my problem...

    Further edit: In other class libraries this seems to have solved the problem, but in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through.

    Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Has anyone had this problem before, and do you know how to get around it? HELP!
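    As background to the version-number juggling above, a generic AssemblyInfo sketch (not a diagnosis of this particular problem): AssemblyVersion is the number other projects bind against, while AssemblyFileVersion is informational only, so one common convention is to keep AssemblyVersion stable between real releases and bump only the file version on every build.

        // AssemblyInfo.cs (sketch; numbers are illustrative)
        using System.Reflection;

        [assembly: AssemblyVersion("2.1.0.0")]      // what references bind to
        [assembly: AssemblyFileVersion("2.1.39.0")] // shown in Explorer, free to change per build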

    Read the article

  • Slow performance when utilizing Interop.DSOFile to search files by Custom Document Property

    - by Gradatc
    I am new to the world of VB.NET and have been tasked to put together a little program to search a directory of about 2000 Excel spreadsheets and put together a list to display based on the value of a Custom Document Property within that spreadsheet file. Given that I am far from a computer programmer by education or trade, this has been an adventure. I've gotten it to work, the results are fine. The problem is, it takes well over a minute to run. It is being run over a LAN connection. When I run it locally (using a 'test' directory of about 300 files) it executes in about 4 seconds. I'm not sure even what to expect as a reasonable execution speed, so I thought I would ask here. The code is below, if anyone thinks changes there might be of use in speeding things up. Thank you in advance! Private Sub listByPt() Dim di As New IO.DirectoryInfo(dir_loc) Dim aryFiles As IO.FileInfo() = di.GetFiles("*" & ext_to_check) Dim fi As IO.FileInfo Dim dso As DSOFile.OleDocumentProperties Dim sfilename As String Dim sheetInfo As Object Dim sfileCount As String Dim ifilesDone As Integer Dim errorList As New ArrayList() Dim ErrorFile As Object Dim ErrorMessage As String 'Initialize progress bar values ifilesDone = 0 sfileCount = di.GetFiles("*" & ext_to_check).Length Me.lblHighProgress.Text = sfileCount Me.lblLowProgress.Text = 0 With Me.progressMain .Maximum = di.GetFiles("*" & ext_to_check).Length .Minimum = 0 .Value = 0 End With 'Loop through all files in the search directory For Each fi In aryFiles dso = New DSOFile.OleDocumentProperties sfilename = fi.FullName Try dso.Open(sfilename, True) 'grab the PT Initials off of the logsheet Catch excep As Runtime.InteropServices.COMException errorList.Add(sfilename) End Try Try sheetInfo = dso.CustomProperties("PTNameChecker").Value Catch ex As Runtime.InteropServices.COMException sheetInfo = "NONE" End Try 'Check to see if the initials on the log sheet 'match those we are searching for If sheetInfo = lstInitials.SelectedValue Then Dim logsheet As New LogSheet logsheet.PTInitials = sheetInfo logsheet.FileName = sfilename PTFiles.Add(logsheet) End If 'update progress bar Me.progressMain.Increment(1) ifilesDone = ifilesDone + 1 lblLowProgress.Text = ifilesDone dso.Close() Next lstResults.Items.Clear() 'loop through results in the PTFiles list 'add results to the listbox, removing the path info For Each showsheet As LogSheet In PTFiles lstResults.Items.Add(Path.GetFileNameWithoutExtension(showsheet.FileName)) Next 'build error message to display to user ErrorMessage = "" For Each ErrorFile In errorList ErrorMessage += ErrorFile & vbCrLf Next MsgBox("The following Log Sheets were unable to be checked" _ & vbCrLf & ErrorMessage) PTFiles.Clear() 'empty PTFiles for next use End Sub

    Read the article

  • Ajax comments form in ASP.NET MVC2, howto?

    - by Artiom Chilaru
    I've been playing around with different aspects of MVC for some time now, and I've reached a situation where I'm not sure what the best way to solve a problem would be. I'm hoping that the SO community will help me out here :P

    I've seen a number of examples of Ajax.BeginForm on the internet, and it seems like a very nifty idea. E.g. you have a dropdown where you select a customer, and on selecting one it loads that client's details into some placeholder on the page. This works perfectly fine. But what do you do if you want to tie some validation into the box?

    Just hypothetically, imagine an article page with user comments at the bottom. Below the comments area there's an ajax-y "Add comment" box. When a user adds a comment, it should appear in the comments area, below the last comment there. If I set the Ajax.BeginForm to append the result of the call to the comments area, it works fine. But what if the posted data is not valid? Instead of appending a "successful" comment to the comments area, I have to show the user validation errors.

    At this point I decided that the area INSIDE the Ajax.BeginForm would be a partial, and the form's submits would return this partial. Validation works fine: on each submit we reload the contents inside the form element. But how do I add the successful comment to the top?

    Other things to consider: the comment form also has a "Preview" button. When the user clicks Preview, I should load the rendered comment into a preview box, which will probably be inside the form area as well.

    I was thinking of using JSON results instead. When the user submits the form, the server code would generate a JSON object with a success value and the rendered partials as properties. Something like:

        { "success": true, "form": "<html form data>", "comment": "successful comment html to inject into the page" }

    This would be a perfect solution, except there's no built-in way in MVC to render a partial into a string inside the controller (separation of concerns, remember?). So... what should I do then? Is there a "correct" way to implement this? Anyone???
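    A helper along these lines is often used with ASP.NET MVC 2 to render a partial to a string from inside a controller, which would make the JSON idea above workable; treat it as a sketch to verify against your MVC version rather than a guaranteed recipe.

        using System.IO;
        using System.Web.Mvc;

        public static class ControllerExtensions
        {
            // Renders a partial view to a string so it can be embedded in a
            // JSON payload (e.g. { success, form, comment }).
            public static string RenderPartialToString(this Controller controller,
                                                       string partialViewName,
                                                       object model)
            {
                controller.ViewData.Model = model;
                ViewEngineResult result = ViewEngines.Engines.FindPartialView(
                    controller.ControllerContext, partialViewName);

                using (var writer = new StringWriter())
                {
                    var viewContext = new ViewContext(controller.ControllerContext,
                                                      result.View,
                                                      controller.ViewData,
                                                      controller.TempData,
                                                      writer);
                    result.View.Render(viewContext, writer);
                    return writer.ToString();
                }
            }
        }

    An action could then return something like Json(new { success = true, comment = this.RenderPartialToString("Comment", newComment) }), where "Comment" is a hypothetical partial view name.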

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things: First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyR, but the documentation for plyR is not very clear (and I have read both the ggplot2 book and the online plyR documentation. My best graph looks like this, the code to create it follows: the R code I use to get it is the following: library(epicalc) ### recode the variables to factors ### recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ), c(1,2,3,4,5,6,7,8,9, NA), c('Very Interested','Somewhat Interested','Not Very Interested','Not At All interested',NA,NA,NA,NA,NA,NA)) ### Combine recoded variables to a common vector Interest1<-c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ) ### Create a second vector to label the first vector by original variable ### a1<-rep("News about Bangladesh", length(int_newcoun)) a2<-rep("Neighboring Countries", length(int_newneigh)) [...] a17<-rep("Education", length(int_educ)) Interest2<-c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17) ### Create a Weighting vector of the proper length ### Interest.weight<-rep(weight, 17) ### Make and save a new data frame from the three vectors ### Interest.df<-cbind(Interest1, Interest2, Interest.weight) Interest.df<-as.data.frame(Interest.df) write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv') ### Sort the factor levels to display properly ### Interest.df$Interest1<-relevel(Interest$Interest1, ref='Not Very Interested') Interest.df$Interest1<-relevel(Interest$Interest1, ref='Somewhat Interested') Interest.df$Interest1<-relevel(Interest$Interest1, ref='Very Interested') Interest.df$Interest2<-relevel(Interest$Interest2, ref='News about Bangladesh') Interest.df$Interest2<-relevel(Interest$Interest2, ref='Education') [...] Interest.df$Interest2<-relevel(Interest$Interest2, ref='European Politics') detach(Interest) attach(Interest) ### Finally create the graph in ggplot2 ### library(ggplot2) p<-ggplot(Interest, aes(Interest2, ..count..)) p<-p+geom_bar((aes(weight=Interest.weight, fill=Interest1))) p<-p+coord_flip() p<-p+scale_y_continuous("", breaks=NA) p<-p+scale_fill_manual(value = rev(brewer.pal(5, "Purples"))) p update_labels(p, list(fill='', x='', y='')) I'd very much appreciate any tips, tricks or hints. Thanks.

    Read the article

  • Does weak typing offer any advantages?

    - by sub
    Don't confuse this with static vs. dynamic typing! You all know JavaScripts/PHPs infamous type systems: PHP example: echo "123abc"+2; // 125 - the reason for this is explained // in the PHP docs but still: This hurts echo "4"+1; // 5 - Oh please echo "ABC"*5; // 0 - WTF // That's too much, seriously now. // This here might be actually a use for weak typing, but no - // it has to output garbage. JavaScript example: // A good old JavaScript, maybe you'll do better? alert("4"+1); // 51 - Oh come on. alert("abc"*3); // NaN - What the... // Have your creators ever heard of the word "consistence"? Python example: # Python's type system is actually a mix # It spits errors on senseless things like the first example below AND # allows intelligent actions like the second example. >>> print("abc"+1) Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> print("abc"+1) TypeError: Can't convert 'int' object to str implicitly >>> print("abc"*5) abcabcabcabcabc Ruby example: puts 4+"1" // Type error - as supposed puts "abc"*4 // abcabcabcabc - makes sense After these examples it should be clear that PHP/JavaScript probably have the most inconsistent type systems out there. This is a fact and really not subjective. Now when having a closer look at the type systems of Ruby and Python it seems like they are having a much more intelligent and consistent type system. I think these examples weren't really necessary as we all know that PHP/JavaScript have a weak and Python/Ruby have a strong type system. I just wanted to mention why I'm asking this. Now I have two questions: When looking at those examples, what are the advantages of PHPs and JavaScripts type systems? I can only find downsides: They are inconsistent and I think we know that this is not good Types conversions are hardly controllable Bugs are more likely to happen and much harder to spot Do you prefer one of the both systems? Why? Personally I have worked with PHP, JavaScript and Python so far and must say that Pythons type system has really only advantages over PHPs and JavaScripts. Does anybody here not think so? Why then?

    Read the article

  • Run bat file in Java and wait 2

    - by Savvas Dalkitsis
    This is a followup question to my other question : http://stackoverflow.com/questions/2434125/run-bat-file-in-java-and-wait The reason i am posting this as a separate question is that the one i already asked was answered correctly. From some research i did my problem is unique to my case so i decided to create a new question. Please go read that question before continuing with this one as they are closely related. Running the proposed code blocks the program at the waitFor invocation. After some research i found that the waitFor method blocks if your process has output that needs to be proccessed so you should first empty the output stream and the error stream. I did those things but my method still blocks. I then found a suggestion to simply loop while waiting the exitValue method to return the exit value of the process and handle the exception thrown if it is not, pausing for a brief moment as well so as not to consume all the CPU. I did this: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class Test { public static void main(String[] args) { try { Process p = Runtime.getRuntime().exec( "cmd /k start SQLScriptsToRun.bat" + " -UuserName -Ppassword" + " projectName"); final BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); final BufferedReader error = new BufferedReader(new InputStreamReader(p.getErrorStream())); new Thread(new Runnable() { @Override public void run() { try { while (input.readLine()!=null) {} } catch (IOException e) { e.printStackTrace(); } } }).start(); new Thread(new Runnable() { @Override public void run() { try { while (error.readLine()!=null) {} } catch (IOException e) { e.printStackTrace(); } } }).start(); int i = 0; boolean finished = false; while (!finished) { try { i = p.exitValue(); finished = true; } catch (IllegalThreadStateException e) { e.printStackTrace(); try { Thread.sleep(500); } catch (InterruptedException e1) { e1.printStackTrace(); } } } System.out.println(i); } catch (IOException e) { e.printStackTrace(); } } } but my process will not end! I keep getting this error: java.lang.IllegalThreadStateException: process has not exited Any ideas as to why my process will not exit? Or do you have any libraries to suggest that handle executing batch files properly and wait until the execution is finished?

    Read the article

  • Jquery Cycle Plugin Question - turning off relative links to photos so it goes to a URL

    - by alpdog14
    I am using Jquery Cycle Plugin and it has a side panel that highlights as the photos change and currently when I click on the text the associated photo pulls up then I have to click on the photo to go to the URL but I would like the text itself to link to the URL. I have looked at the fn.cycle.defaults but not sure what to change and I tried a few things but nothing works. If anyone can help me figure this out it would be most helpful. Here are the fn.cycle.defaults: fx: 'fade', // one of: fade, shuffle, zoom, scrollLeft, etc timeout: 4000, // milliseconds between slide transitions (0 to disable auto advance) continuous: 0, // true to start next transition immediately after current one completes speed: 1000, // speed of the transition (any valid fx speed value) speedIn: null, // speed of the 'in' transition speedOut: null, // speed of the 'out' transition next: null, // id of element to use as click trigger for next slide prev: null, // id of element to use as click trigger for previous slide prevNextClick: null, // callback fn for prev/next clicks: function(isNext, zeroBasedSlideIndex, slideElement) pager: null, // id of element to use as pager container pagerClick: null, // callback fn for pager clicks: function(zeroBasedSlideIndex, slideElement) pagerEvent: null, // event which drives the pager navigation pagerAnchorBuilder: null, // callback fn for building anchor links before: null, // transition callback (scope set to element to be shown) after: null, // transition callback (scope set to element that was shown) end: null, // callback invoked when the slideshow terminates (use with autostop or nowrap options) easing: null, // easing method for both in and out transitions easeIn: null, // easing for "in" transition easeOut: null, // easing for "out" transition shuffle: null, // coords for shuffle animation, ex: { top:15, left: 200 } animIn: null, // properties that define how the slide animates in animOut: null, // properties that define how the slide animates out cssBefore: null, // properties that define the initial state of the slide before transitioning in cssAfter: null, // properties that defined the state of the slide after transitioning out fxFn: null, // function used to control the transition height: 'auto', // container height startingSlide: 0, // zero-based index of the first slide to be displayed sync: 1, // true if in/out transitions should occur simultaneously random: 0, // true for random, false for sequence (not applicable to shuffle fx) fit: 0, // force slides to fit container pause: true, // true to enable "pause on hover" autostop: 0, // true to end slideshow after X transitions (where X == slide count) autostopCount: 0, // number of transitions (optionally used with autostop to define X) delay: 0, // additional delay (in ms) for first transition (hint: can be negative) slideExpr: null, // expression for selecting slides (if something other than all children is required) cleartype: 0, // true if clearType corrections should be applied (for IE) nowrap: 0 // true to prevent slideshow from wrapping }; I have tried changing the pageClick and pagerEvent but nothing seems to be working. Please help!!!

    Read the article

  • Where does the version number come from?

    - by Robert Schneider
    I have a version control system (e.g. Subversion) and now I'd like to set up a build process. I have to create a version number and insert it into the system. But where does the version number come from, and where does it go? Assume I want to use the common <major>.<minor>.<bugfix/revision> scheme.

    Should I pass a number to the build script? Or should I pass arguments like increaseMajor, increaseMinor, increaseRevision? Or would you recommend creating a branch with the number, which the build script then detects? I can imagine that the major and minor version numbers have to be put in manually somewhere; the revision number could be increased automatically. But I still don't know where I would place the major and minor numbers. In my case I have some PHP files that I would like to zip, but before that I have to insert some version numbers into a PHP file.

    I have edited this post to try to make my request clearer: I do not use Subversion, that was just an example. And I don't want to discuss the version number scheme. Imagine I want to create version 3.5.0 or 3.5.1. Would I pass this version number to a build script? Would the script create the branch in the repository with this number, or would it expect that someone has already created this branch - manually? Or would the build script look for the name of the branch (e.g. '3.5.1') and use it for further things? And does the version number come from my brain or is it created automatically (I guess the major/minor numbers come from my little brain and the revision number is generated)? Or would you place the number into a file that gets checked into the repository? I guess if I used a release management tool I would insert the version number there, but I don't use one yet.

    Read the article

  • Safari/Chrome problem with ajaxsubmit?

    - by Jan
    Hi I'm currently having some weird issues with ajaxsubmit (http://jquery.malsup.com/form/#ajaxSubmit), which I'm currently using in a project. I have a flow where I need to open a form into a modal window. I'm using fancybox for that and it works like a charm. When the form has been forced to open in the fancybox window there can happen two things. 1) If the user who is about to submit the form is logged in she should see a confirmation in the modal box, that her input was succesfully submitted 2)If the user is not logged in there should be loaded a login form once she hits the submit button 2.1) When the user has logged in she should receive a confirmation in the modal box. This is also working like a charm in Firefox, IE8 and IE7 but not in Safari or Chrome. The weird part is that it seems like safari and chrome are completely ignoring my ajaxsubmit form. To force the first form to be opened I use the follwoing script - this part is working in both Safari and Chrome. $(".klikEnPrisForm").ajaxForm({ success: function(data){ $.fancybox({'content':data}); } }); My ajaxsubmit form scrip looks like this var options = { url: '/?altTemplate=XmlProxyKlikEnPris', dataType: 'xml', data: $(this).serializeArray(), success: function(data) { if ($(data).find('loggetind').text() == 'true') { $("#klikenpris").hide(); $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner'); $('#fancybox-inner-klik').load('/KlikEnPrisAccept?tilKvittering=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true); } else { $("#klikenpris").hide(); $("#fancybox-wrap").css({ 'width': '480px', 'height': '220px' }); $("#fancybox-inner").css({ 'width': '460px', 'height': '220px' }); $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner'); $('#fancybox-inner-klik').load('/login.aspx?loginklikpris=0&klikpris=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true); } } }; // bind to the form's submit event $('#klikenprisform').submit(function() { // inside event callbacks 'this' is the DOM element so we first // wrap it in a jQuery object and then invoke ajaxSubmit $(this).ajaxSubmit(options); // !!! Important !!! // always return false to prevent standard browser submit and page navigation return false; }); I have tried inserting an alert in the succes callback function but it's never being called it seems. It seems like the default action is not being overruled by the link written in the "url" in ajaxsubmit. I'm really puzzled about this, since it's working nicely in other browsers and I'm completely lost on how I should approach the debugging in safari/chrome. I hope all the above makes sense and I'm looking forward to hear any suggestions. Cheers!

    Read the article

  • Asp.net MVC VirtualPathProvider views parse error

    - by madcapnmckay
    Hi, I am working on a plugin system for Asp.net MVC 2. I have a dll containing controllers and views as embedded resources. I scan the plugin dlls for controller using StructureMap and I then can pull them out and instantiate them when requested. This works fine. I then have a VirtualPathProvider which I adapted from this post public class AssemblyResourceProvider : VirtualPathProvider { protected virtual string WidgetDirectory { get { return "~/bin"; } } private bool IsAppResourcePath(string virtualPath) { var checkPath = VirtualPathUtility.ToAppRelative(virtualPath); return checkPath.StartsWith(WidgetDirectory, StringComparison.InvariantCultureIgnoreCase); } public override bool FileExists(string virtualPath) { return (IsAppResourcePath(virtualPath) || base.FileExists(virtualPath)); } public override VirtualFile GetFile(string virtualPath) { return IsAppResourcePath(virtualPath) ? new AssemblyResourceVirtualFile(virtualPath) : base.GetFile(virtualPath); } public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart) { return IsAppResourcePath(virtualPath) ? null : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart); } } internal class AssemblyResourceVirtualFile : VirtualFile { private readonly string path; public AssemblyResourceVirtualFile(string virtualPath) : base(virtualPath) { path = VirtualPathUtility.ToAppRelative(virtualPath); } public override Stream Open() { var parts = path.Split('/'); var resourceName = Path.GetFileName(path); var apath = HttpContext.Current.Server.MapPath(Path.GetDirectoryName(path)); var assembly = Assembly.LoadFile(apath); return assembly != null ? assembly.GetManifestResourceStream(assembly.GetManifestResourceNames().SingleOrDefault(s => string.Compare(s, resourceName, true) == 0)) : null; } } The VPP seems to be working fine also. The view is found and is pulled out into a stream. I then receive a parse error Could not load type 'System.Web.Mvc.ViewUserControl<dynamic>'. which I can't find mentioned in any previous example of pluggable views. Why would my view not compile at this stage? Thanks for any help, Ian EDIT: Getting closer to an answer but not quite clear why things aren't compiling. Based on the comments I checked the versions and everything is in V2, I believe dynamic was brought in at V2 so this is fine. I don't even have V3 installed so it can't be that. I have however got the view to render, if I remove the <dynamic> altogether. So a VPP works but only if the view is not strongly typed or dynamic This makes sense for the strongly typed scenario as the type is in the dynamically loaded dll so the viewengine will not be aware of it, even though the dll is in the bin. Is there a way to load types at app start? Considering having a go with MEF instead of my bespoke Structuremap solution. What do you think?
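    For completeness, the provider has to be registered when the application starts; the snippet below is the usual way to do that and assumes the AssemblyResourceProvider class from the question. It does not by itself address the ViewUserControl<dynamic> parse error - one common explanation worth checking is that views served from a virtual path outside ~/Views are compiled without the MVC 2 page parser settings that the Views\web.config normally supplies, so the <dynamic> syntax is not understood.

        // Global.asax.cs (sketch)
        using System.Web.Hosting;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // AssemblyResourceProvider is the VirtualPathProvider defined above
                HostingEnvironment.RegisterVirtualPathProvider(new AssemblyResourceProvider());

                // ... route registration etc.
            }
        }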

    Read the article

  • What's wrong with my Triple DES wrapper?

    - by Chen Kinnrot
    It seems that my code adds 6 bytes to the result file after Encrypt and then Decrypt are called. I tried it on an MKV file. Please help - here is my code:

        class TripleDESCryptoService : IEncryptor, IDecryptor
        {
            public void Encrypt(string inputFileName, string outputFileName, string key)
            {
                EncryptFile(inputFileName, outputFileName, key);
            }

            public void Decrypt(string inputFileName, string outputFileName, string key)
            {
                DecryptFile(inputFileName, outputFileName, key);
            }

            static void EncryptFile(string inputFileName, string outputFileName, string sKey)
            {
                var outFile = new FileStream(outputFileName, FileMode.OpenOrCreate, FileAccess.ReadWrite);

                // The cryptographic service provider we're going to use
                var cryptoAlgorithm = new TripleDESCryptoServiceProvider();
                SetKeys(cryptoAlgorithm, sKey);

                // This object links data streams to cryptographic values
                var cryptoStream = new CryptoStream(outFile, cryptoAlgorithm.CreateEncryptor(), CryptoStreamMode.Write);

                // This stream writer will write the new file
                var encryptionStream = new BinaryWriter(cryptoStream);

                // This stream reader will read the file to encrypt
                var inFile = new FileStream(inputFileName, FileMode.Open, FileAccess.Read);
                var readwe = new BinaryReader(inFile);

                // Read the file to encrypt in one go
                var date = readwe.ReadBytes((int)readwe.BaseStream.Length);

                // Write to the encryption stream
                encryptionStream.Write(date);

                // Wrap things up
                inFile.Close();
                encryptionStream.Flush();
                encryptionStream.Close();
            }

            private static void SetKeys(SymmetricAlgorithm algorithm, string key)
            {
                var keyAsBytes = Encoding.ASCII.GetBytes(key);
                algorithm.IV = keyAsBytes.Take(algorithm.IV.Length).ToArray();
                algorithm.Key = keyAsBytes.Take(algorithm.Key.Length).ToArray();
            }

            static void DecryptFile(string inputFilename, string outputFilename, string sKey)
            {
                // The encrypted file
                var inFile = File.OpenRead(inputFilename);

                // The decrypted file
                var outFile = new FileStream(outputFilename, FileMode.OpenOrCreate, FileAccess.ReadWrite);

                // Prepare the algorithm and set the key
                var cryptAlgorithm = new TripleDESCryptoServiceProvider();
                SetKeys(cryptAlgorithm, sKey);

                // The cryptographic stream takes in the encrypted file
                var encryptionStream = new CryptoStream(inFile, cryptAlgorithm.CreateDecryptor(), CryptoStreamMode.Read);

                // Write the new unencrypted file
                var cleanStreamReader = new BinaryReader(encryptionStream);
                var cleanStreamWriter = new BinaryWriter(outFile);
                cleanStreamWriter.Write(cleanStreamReader.ReadBytes((int)inFile.Length));
                cleanStreamWriter.Close();
                outFile.Close();
                cleanStreamReader.Close();
            }
        }
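    Two common causes of trailing bytes in this pattern are worth ruling out: PKCS7 padding that never gets stripped from the final 8-byte 3DES block, and FileMode.OpenOrCreate, which does not truncate an existing output file and can leave stale bytes at the end. A hedged sketch of the decrypt side that sidesteps both, reusing the SetKeys helper from the class above - it opens the output with FileMode.Create and lets the CryptoStream decide where the (unpadded) data ends instead of reading inFile.Length bytes:

        using System.IO;
        using System.Security.Cryptography;

        static void DecryptFileSketch(string inputFileName, string outputFileName, string sKey)
        {
            var algorithm = new TripleDESCryptoServiceProvider();
            SetKeys(algorithm, sKey);  // same key/IV helper as in the question

            using (var inFile = File.OpenRead(inputFileName))
            using (var outFile = new FileStream(outputFileName, FileMode.Create, FileAccess.Write))
            using (var cryptoStream = new CryptoStream(inFile, algorithm.CreateDecryptor(), CryptoStreamMode.Read))
            {
                var buffer = new byte[64 * 1024];
                int read;
                // Copy until the crypto stream reports end-of-data; padding is
                // removed by the stream, so no extra bytes reach the output.
                while ((read = cryptoStream.Read(buffer, 0, buffer.Length)) > 0)
                    outFile.Write(buffer, 0, read);
            }
        }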

    Read the article

  • How to implement a SIMPLE "You typed ACB, did you mean ABC?"

    - by marcgg
    I know this is not a straight-up question, so if you need me to provide more information about its scope, let me know. There are a bunch of questions that address almost the same issue (they are linked here), but never the exact same one with the same kind of scope and objective - at least as far as I know.

    Context:

    - I have an MP3 file with ID3 tags for artist name and song title.
    - I have two tables, Artists and Songs.
    - The ID3 tags might be slightly off (e.g. Mikaell Jacksonne).
    - I'm using ASP.NET + C# and an MSSQL database.

    I need to synchronize the MP3s with the database. Meaning:

    - The user launches a script.
    - The script browses through all the MP3s.
    - The script says "Is 'Mikaell Jacksonne' 'Michael Jackson'? YES/NO"
    - The user picks and we start over.

    Examples of what the system could find. In the database:

        SONGS = {"This is a great song title", "This is a song title"}
        ARTISTS = {"Michael Jackson"}

    Outputs:

        "This is a grt song title" did you mean "This is a great song title"?
        "This is song title" did you mean "This is a song title"?
        "This si a song title" did you mean "This is a song title"?
        "This si song a title" did you mean "This is a song title"?
        "Jackson, Michael" did you mean "Michael Jackson"?
        "JacksonMichael" did you mean "Michael Jackson"?
        "Michael Jacksno" did you mean "Michael Jackson"?
        etc.

    I read some documentation from this /how-do-you-implement-a-did-you-mean and this is not exactly what I need, since I don't want to check an entire dictionary. I also can't really use a web service, since it depends a lot on what I already have in my database. If possible I'd also like to avoid dealing with distances and other complicated things. I could use the Google API (or something similar) to do this, meaning that the script would try spell checking and test it against the database, but I feel there could be a better solution, since my database might end up being really specific, with weird songs and artists, making spell checking useless. I could also try something like what has been explained in this post, using Soundex for C#. Using a regular spell checker won't work because I won't be dealing with words but with names and 'titles'. So my question is: is there a relatively simple way of doing this, and if so, what is it? Any kind of help would be appreciated. Thanks!
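
    Although the post hopes to avoid distances, a plain Levenshtein edit distance is compact enough that it is often the simplest workable answer here: normalize both strings (lowercase, strip punctuation and spaces), compute the distance against each known artist or title, and suggest the closest match when the distance falls under a small threshold. The sketch below is illustrative C#; the normalization rules and the threshold value are assumptions, not anything taken from the post.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class DidYouMean
        {
            // Classic dynamic-programming Levenshtein distance (number of single-character edits).
            public static int Levenshtein(string a, string b)
            {
                var d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;

                for (int i = 1; i <= a.Length; i++)
                {
                    for (int j = 1; j <= b.Length; j++)
                    {
                        int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost);
                    }
                }
                return d[a.Length, b.Length];
            }

            // Returns the closest known value, or null if nothing is within the threshold.
            public static string Suggest(string input, IEnumerable<string> knownValues, int threshold)
            {
                var normalizedInput = Normalize(input);
                var best = knownValues
                    .Select(k => new { Value = k, Distance = Levenshtein(normalizedInput, Normalize(k)) })
                    .OrderBy(x => x.Distance)
                    .FirstOrDefault();
                return best != null && best.Distance <= threshold ? best.Value : null;
            }

            private static string Normalize(string s)
            {
                // Lowercase and keep only letters and digits, so punctuation and spacing differences don't count.
                return new string(s.ToLowerInvariant().Where(char.IsLetterOrDigit).ToArray());
            }
        }

    DidYouMean.Suggest(tagValue, artistNames, threshold) returns the nearest stored name or null; the right threshold depends on how noisy the tags are (the "Mikaell Jacksonne" example needs a fairly generous one). Reordered names such as "Jackson, Michael" would still need token-level handling (for example comparing sorted word sets), and the Soundex route the post mentions can be layered on top for phonetic misspellings.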

    Read the article

  • Specialization of template after instantiation?

    - by Onur Cobanoglu
    Hi, my full code is too long, but here is a snippet that reflects the essence of my problem:

        class BPCFGParser {
        public:
            ...
            ...
            class Edge {
                ...
                ...
            };
            class ActiveEquivClass {
                ...
                ...
            };
            class PassiveEquivClass {
                ...
                ...
            };
            struct EqActiveEquivClass {
                ...
                ...
            };
            struct EqPassiveEquivClass {
                ...
                ...
            };

            unordered_map<ActiveEquivClass, Edge *, hash<ActiveEquivClass>, EqActiveEquivClass> discovered_active_edges;
            unordered_map<PassiveEquivClass, Edge *, hash<PassiveEquivClass>, EqPassiveEquivClass> discovered_passive_edges;
        };

        namespace std {
            template <>
            class hash<BPCFGParser::ActiveEquivClass> {
            public:
                size_t operator()(const BPCFGParser::ActiveEquivClass & aec) const {
                }
            };

            template <>
            class hash<BPCFGParser::PassiveEquivClass> {
            public:
                size_t operator()(const BPCFGParser::PassiveEquivClass & pec) const {
                }
            };
        }

    When I compile this code, I get the following errors:

        In file included from BPCFGParser.cpp:3,
                         from experiments.cpp:2:
        BPCFGParser.h:408: error: specialization of ‘std::hash<BPCFGParser::ActiveEquivClass>’ after instantiation
        BPCFGParser.h:408: error: redefinition of ‘class std::hash<BPCFGParser::ActiveEquivClass>’
        /usr/include/c++/4.3/tr1_impl/functional_hash.h:44: error: previous definition of ‘class std::hash<BPCFGParser::ActiveEquivClass>’
        BPCFGParser.h:445: error: specialization of ‘std::hash<BPCFGParser::PassiveEquivClass>’ after instantiation
        BPCFGParser.h:445: error: redefinition of ‘class std::hash<BPCFGParser::PassiveEquivClass>’
        /usr/include/c++/4.3/tr1_impl/functional_hash.h:44: error: previous definition of ‘class std::hash<BPCFGParser::PassiveEquivClass>’

    Now I have to specialize std::hash for these classes (because the standard std::hash definition does not cover user-defined types). When I move these template specializations before the definition of class BPCFGParser, I get a variety of errors for the variety of different things I tried, and somewhere (http://www.parashift.com/c++-faq-lite/misc-technical-issues.html) I read that:

    Whenever you use a class as a template parameter, the declaration of that class must be complete and not simply forward declared.

    So I'm stuck: I cannot specialize the templates after the BPCFGParser definition, and I cannot specialize them before it. How can I get this working? Thanks in advance, Onur

    Read the article

  • Cannot add Silverlight Maps Control to Windows Mobile 7 application

    - by Jacob
    I know the bits just came out today, but one of the first things I want to do with the newly released Windows Mobile 7 SDK is put a map up on the screen and mess around. I've downloaded the latest version of the Silverlight Maps Control and added the references to my application. As a matter of fact, the VS 2010 design view of MainPage.xaml shows the map control after adding the namespace and placing the control. I'm using the provided VS 2010 Express version that comes with the Windows Mobile 7 SDK and have just used the New Project - Windows Phone Application template. When I try to build, I get two warnings related to the Microsoft.Maps.MapControl DLLs.

    Warning 1: The primary reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework, "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e".

    Warning 2: The primary reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework, "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e".

    I'm leaning towards some way of adding System.Windows.Browser to the targeted framework version, but I'm not even sure if that is possible. To be more specific: I'm looking for a way to get the Silverlight Maps Control up in a Windows Phone 7 Series application, if possible. Thanks.

    Read the article

< Previous Page | 765 766 767 768 769 770 771 772 773 774 775 776  | Next Page >