Search Results

Search found 245 results on 10 pages for 'mistaken'.

Page 8/10

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), the performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if this would give me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8GB RAM, 2 SATA drives - 320GB (ST3320418AS) and 1TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it. Is there any way to manage I/O priorities on Windows?

    Read the article

  • Are two periods allowed in the local-part of an email address?

    - by Mike B
    A third-party email gateway relay is refusing to process a message for an email address we're sending to. The address is in the format of [email protected] (note the two periods). Is this allowed by RFC guidelines? RFC 2822 seems to object to this in section 3.4.1: The locally interpreted string is either a quoted-string or a dot-atom. If the string can be represented as a dot-atom (that is, it contains no characters other than atext characters or "." surrounded by atext characters), then the dot-atom form SHOULD be used and the quoted-string form SHOULD NOT be used. Comments and folding white space SHOULD NOT be used around the "@" in the addr-spec. Furthermore, in that same section, it references this:

        addr-spec  = local-part "@" domain
        local-part = dot-atom / quoted-string / obs-local-part

    I interpret this to mean that the local-part can contain content separated by dots, but that there cannot be two successive dots, and it cannot start or end with a dot. That being said, I'm not familiar with dot-atom syntax, so maybe I'm mistaken here. Can someone please confirm and explain?
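
    As a rough TypeScript sketch of that reading of the grammar (this covers only the dot-atom form, not the quoted-string form; the character class is my transcription of RFC 2822 atext, and the sample addresses are just illustrations):

        // Dot-atom local-part: runs of atext characters separated by single dots,
        // so no leading dot, no trailing dot and no consecutive dots.
        // (Quoted-string local-parts are deliberately not handled here.)
        const ATEXT = "[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]";
        const DOT_ATOM = new RegExp(`^${ATEXT}+(\\.${ATEXT}+)*$`);

        function isDotAtomLocalPart(localPart: string): boolean {
          return DOT_ATOM.test(localPart);
        }

        console.log(isDotAtomLocalPart("john.smith"));  // true
        console.log(isDotAtomLocalPart("john..smith")); // false - consecutive dots
        console.log(isDotAtomLocalPart(".john"));       // false - leading dot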

    Read the article

  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    This is an underpaid employee writing, who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 MBit/s downstream and 1 MBit/s upstream) and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 GBit/s in both directions, if I'm not mistaken). This is what I've set up (note I'm not making use of the second DSL connection at all)...

                              ETH0                     ETH1
            [ SWITCH ]---[ LINUX DEBIAN ROUTER ]---[ DSL MODEM 1 ]---[ INTERNET ]
             |      |
            PC1    PC2 ...

    ... when my boss asked me if it was somehow possible to get 32 MBit/s downstream and 2 MBit/s upstream. At that time I replied "no" without thinking too much about it. Now I've just had the following idea...

                              ETH1
                               ,---[ DSL MODEM 1 (NON-STATIC IP) ]---,
                              ETH0                                   |                       ETH0
            [ SWITCH ]---[ LINUX DEBIAN ROUTER ]                [ INTERNET ]---[ LINUX DEBIAN SERVER ]
             |      |         |                                      |
            PC1    PC2 ...    '---[ DSL MODEM 2 (STATIC IP) ]--------'
                              ETH2

    ... but I have absolutely no clue how to implement that. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!

    Read the article

  • Moving an external hard drive while running

    - by user1108939
    I mean physically moving the drive around. I've never dealt with external hard drives before. I just plugged in this WD My Passport to test the transfer rate. At one point I 'safely ejected' the drive. A minute later I decide to check the underside of the drive, not realizing the disk is still spinning. I lift the drive, rotating my wrist about 70 degrees to the left... I hear a sequence of three high-pitched sounds. I couldn't determine whether that was an indication beep from an internal security feature or the head scratching the platter (oh god...). The drive stops and USB power is disconnected. I reconnect it - it shows up fine - reads/writes. The drive was not reading/writing when I moved it. Did I damage my drive? Are these things that fragile? I thought them to be at least as durable as a standard 2.5" internal drive. Am I mistaken?

    Read the article

  • Why does this C quicksort function implementation work much slower (tape comparisons, tape swapping) than the bubble sort function?

    - by Artur Mustafin
    I'm going to implement a toy tape "mainframe" for students, showing how much faster the "quicksort" class of functions is (recursive or not does not really matter, due to the slow hardware and the well-known stack-reversal techniques) compared to the "bubblesort" class of functions. While I'm clear about the hardware implementation and controllers, I assumed that a quicksort function would be much faster than the other ones in terms of sequence, order and comparison distance (it is much faster to rewind the tape from the middle than from the very end, because of the different rewind speeds). Unfortunately, this is not true: this simple "bubble" code shows great improvements compared to the "quicksort" functions in terms of comparison distances, direction, and number of comparisons and writes. So I have 3 questions: Have I made a mistake in my implementation of the quicksort function? Have I made a mistake in my implementation of the bubblesort function? If not, why does the "bubblesort" function work much faster (in comparison and write operations) than the "quicksort" function? I already have a "quicksort" function:

        void quicksort(float *a, long l, long r, const compare_function& compare)
        {
            long i = l, j = r, temp, m = (l + r) / 2;
            if (l == r) return;
            if (l == r - 1) {
                if (compare(a, l, r)) {
                    swap(a, l, r);
                }
                return;
            }
            if (l < r - 1) {
                while (1) {
                    i = l;
                    j = r;
                    while (i < m && !compare(a, i, m)) i++;
                    while (m < j && !compare(a, m, j)) j--;
                    if (i >= j) {
                        break;
                    }
                    swap(a, i, j);
                }
                if (l < m) quicksort(a, l, m, compare);
                if (m < r) quicksort(a, m, r, compare);
                return;
            }
        }

    and my own kind of implementation of the "bubblesort" function:

        void bubblesort(float *a, long l, long r, const compare_function& compare)
        {
            long i, j, k;
            if (l == r) {
                return;
            }
            if (l == r - 1) {
                if (compare(a, l, r)) {
                    swap(a, l, r);
                }
                return;
            }
            if (l < r - 1) {
                while (l < r) {
                    i = l;
                    j = l;
                    while (i < r) {
                        i++;
                        if (!compare(a, j, i)) {
                            continue;
                        }
                        j = i;
                    }
                    if (l < j) {
                        swap(a, l, j);
                    }
                    l++;
                    i = r;
                    k = r;
                    while (l < i) {
                        i--;
                        if (!compare(a, i, k)) {
                            continue;
                        }
                        k = i;
                    }
                    if (k < r) {
                        swap(a, k, r);
                    }
                    r--;
                }
                return;
            }
        }

    I have used these sort functions in a test sample like this:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <conio.h>

        long swap_count;
        long compare_count;

        typedef long (*compare_function)(float *, long, long);
        typedef void (*sort_function)(float *, long, long, const compare_function&);

        void init(float *, long);
        void print(float *, long);
        void sort(float *, long, const sort_function&);
        void swap(float *a, long l, long r);
        long less(float *a, long l, long r);
        long greater(float *a, long l, long r);
        void bubblesort(float *, long, long, const compare_function&);
        void quicksort(float *, long, long, const compare_function&);

        void main()
        {
            int n;
            printf("n=");
            scanf("%d", &n);
            printf("\r\n");
            long i;
            float *a = (float *)malloc(n*n*sizeof(float));
            sort(a, n, &bubblesort);
            print(a, n);
            sort(a, n, &quicksort);
            print(a, n);
            free(a);
        }

        long less(float *a, long l, long r)
        {
            compare_count++;
            return *(a+l) < *(a+r) ? 1 : 0;
        }

        long greater(float *a, long l, long r)
        {
            compare_count++;
            return *(a+l) > *(a+r) ? 1 : 0;
        }

        void swap(float *a, long l, long r)
        {
            swap_count++;
            float temp;
            temp = *(a+l);
            *(a+l) = *(a+r);
            *(a+r) = temp;
        }

        float tg(float x) { return tan(x); }

        float ctg(float x) { return 1.0/tan(x); }

        void init(float *m, long n)
        {
            long i, j;
            for (i = 0; i < n; i++) {
                for (j = 0; j < n; j++) {
                    m[i + j*n] = tg(0.2*(i+1)) + ctg(0.3*(j+1));
                }
            }
        }

        void print(float *m, long n)
        {
            long i, j;
            for (i = 0; i < n; i++) {
                for (j = 0; j < n; j++) {
                    printf(" %5.1f", m[i + j*n]);
                }
                printf("\r\n");
            }
            printf("\r\n");
        }

        void sort(float *a, long n, const sort_function& sort)
        {
            long i, sort_compare = 0, sort_swap = 0;
            init(a, n);
            for (i = 0; i < n*n; i += n) {
                if (fmod(i / n, 2) == 0) {
                    compare_count = 0;
                    swap_count = 0;
                    sort(a, i, i+n-1, &less);
                    if (swap_count == 0) {
                        compare_count = 0;
                        sort(a, i, i+n-1, &greater);
                    }
                    sort_compare += compare_count;
                    sort_swap += swap_count;
                }
            }
            printf("compare=%ld\r\n", sort_compare);
            printf("swap=%ld\r\n", sort_swap);
            printf("\r\n");
        }

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I had described the factors that lead to compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance. First, let's look at the business implications of increasing complexity of process compliance for financial institutions:
    · Increased time and cost of compliance due to duplication of effort in conforming to regulatory requirements, driven by process changes from evolving regulatory mandates, shifting business priorities or internal/external audit requirements
    · Delays in audit reporting due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process
    Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing the operational risk of their processes, generally speaking, have two choices:
    · Rip-and-replace existing applications with new off-the-shelf applications.
    · Extend capabilities of existing applications by modeling their data and process interactions, with other applications or user channels, outside of the application boundary using BPM.
    The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, pre-built compliance of any packaged or custom-built application should not be mistaken as a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, that vulnerabilities to unknown and accidental faults arise, potentially resulting in errors and leading to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries - in other words, span multiple applications - constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with "KYC" policies void even when the "KYC" checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process "gaps" for Financial Services processes including "Customer On-boarding", "Quote-to-Contract", "Deposit/Loan Origination", "Trade Exceptions", "Interest Claim Tracking", etc.
    If you are faced with a similar challenge and need any guidance, feel free to drop me a note.

    Read the article

  • Is there any kind of established architecture for browser based games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will be happening in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the start of a new round by a game master and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototype different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: Is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that? EDIT: After reading the first answers I am under the impression of having made a mistake by including the "MMO" part in the title. The game will not be all fancy (i.e. 3D or such) on the client side, and the logic will exist completely on the server - apart from basic form validation for the user, which will also be mirrored on the server side. So the target toolset will be HTML5, JavaScript, probably jQuery (UI). My question is more related to the software architecture/design of a system that enforces certain rules. Separation of ruleset and presentation: One problem I am having is that I want to separate the game rules from the presentation. The first step would be to make a separate module for the game "engine" that only exposes an interface allowing all actions to be taken in a clean way. If an action fails with regard to some pre/post condition, the engine throws an exception which is then presented to the user, like "you cannot sell something you do not own" or "after that you would end up in a situation which is not a valid game state." The problem here is that I would also like to be able to not even present invalid actions in the first place, or to grey out the corresponding UI elements. Changing and tweaking the ruleset: Another big thing is the ruleset. It will probably evolve over time and most definitely must be tweaked. What's more, it should be possible (to a certain extent) to build a ruleset that fits a specific game round, i.e. choosing different kinds of behaviours in different aspects of the game.
    This would be something like "we play with extension A today, but we throw out extension B." For me, this screams "architectural/design pattern", but I have no idea who might have published on something like this, not even what to google for.
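
    To make the "separate engine interface" idea concrete, here is a rough TypeScript sketch (my own illustration, not an established framework; the action names and types are hypothetical): an engine that can both apply actions and report which actions are currently legal, so the UI can hide or grey out anything that would fail instead of waiting for an exception.

        // Hypothetical action and rule types - placeholders for illustration only.
        interface GameAction {
          type: "REGISTER_DEAL" | "APPROVE_DEAL" | "START_ROUND";
          player: string;
          payload?: unknown;
        }

        interface RuleViolation {
          rule: string;          // e.g. "cannot sell something you do not own"
          message: string;
        }

        // A single tweakable rule; a "ruleset" is just the list of rules the
        // engine is constructed with ("extension A in, extension B out").
        interface Rule {
          name: string;
          check(action: GameAction): RuleViolation | null;
        }

        interface GameEngine {
          // Pure check: which rules would this action violate right now?
          validate(action: GameAction): RuleViolation[];
          // Apply the action if valid; return the violations otherwise.
          apply(action: GameAction): RuleViolation[];
          // Lets the UI ask up front what a given player may currently do.
          legalActions(player: string): GameAction["type"][];
        }

        // The presentation layer only renders what the engine reports as legal:
        function renderActionButtons(engine: GameEngine, player: string): string[] {
          return engine.legalActions(player);   // e.g. ["REGISTER_DEAL"]
        }

    The same validate() call can then run on both the client (for immediate feedback) and the server (as the authoritative check), which keeps the mirrored form validation in one place.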

    Read the article

  • Relation between [[Prototype]] and prototype in JavaScript

    - by Claudiu
    From http://www.jibbering.com/faq/faq_notes/closures.html : Note: ECMAScript defines an internal [[prototype]] property of the internal Object type. This property is not directly accessible with scripts, but it is the chain of objects referred to with the internal [[prototype]] property that is used in property accessor resolution; the object's prototype chain. A public prototype property exists to allow the assignment, definition and manipulation of prototypes in association with the internal [[prototype]] property. The details of the relationship between the two are described in ECMA 262 (3rd edition) and are beyond the scope of this discussion. What are the details of the relationship between the two? I've browsed through ECMA 262 and all I've read there is stuff like:
    - The constructor's associated prototype can be referenced by the program expression constructor.prototype
    - Native ECMAScript objects have an internal property called [[Prototype]]. The value of this property is either null or an object and is used for implementing inheritance.
    - Every built-in function and every built-in constructor has the Function prototype object, which is the initial value of the expression Function.prototype
    - Every built-in prototype object has the Object prototype object, which is the initial value of the expression Object.prototype (15.3.2.1), as the value of its internal [[Prototype]] property, except the Object prototype object itself.
    From this, all I gather is that the [[Prototype]] property is equivalent to the prototype property for pretty much any object. Am I mistaken?
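
    One way to see the relationship in running code (my illustration in TypeScript with modern syntax, not taken from the FAQ or the spec): a constructor's public prototype property becomes the [[Prototype]] of the objects that constructor creates, which Object.getPrototypeOf() exposes - but an object's [[Prototype]] and an own prototype property are not the same thing.

        class Foo {}                     // any constructor will do
        const obj = new Foo();

        // The object's internal [[Prototype]] is the constructor's .prototype:
        console.log(Object.getPrototypeOf(obj) === Foo.prototype);               // true
        // ...and the chain continues up to Object.prototype:
        console.log(Object.getPrototypeOf(Foo.prototype) === Object.prototype);  // true

        // But they are not "equivalent" on the same object: obj has a
        // [[Prototype]] yet no own 'prototype' property of its own.
        console.log(Object.prototype.hasOwnProperty.call(obj, "prototype"));     // false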

    Read the article

  • Does Google Maps API v3 allow larger zoom values?

    - by Dr1Ku
    If you use the satellite GMapType in this Google-provided example in v3 of the API, the maximum zoom level has a scale of 2m / 10ft, whereas the v2 version of another Google-provided example (I had to use another one, since control-simple doesn't have the scale control) yields a maximum scale of 20m / 50ft. Is this a new "feature" of v3? I have to mention that I've tested the examples on the same GLatLng regions - so my guess is that tile detail level doesn't influence it. Am I mistaken? As mentioned in another question, v3 is to be considered of very Labs-y/beta quality, so use in production should be discouraged for the time being. I've been drawn to the subject since I have to "increase the zoom level of a GMap". The answers here seem to suggest using GTileLayer, and I'm considering GMapCreator, although this will involve some effort. What I'm trying to achieve is a larger zoom level; a scale of 2m / 10ft would be perfect. I have a map where the tiles aren't that hi-res and quite a few markers. Since the area doesn't have hi-res tiles, the distance between the markers is really tiny, creating some problematic overlapping. Or better yet, how can you create a custom map which allows higher zoom levels, as is done for the Google campus where the 2m / 10ft scale is achieved, without using your own tile server? I've seen an example in a fellow Stackoverflower's GMaps sandbox, where the tiles are manually created based on the zoom level. I don't think I'm making any more sense, so I'm just going to end this big question here; I've been wandering around trying to find a solution for hours now. Hope that someone comes to my aid though! Thank you in advance!

    Read the article

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to: /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak? UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to: +1sgess5rjcrgbmp3kqr6bmv_3474.rem Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.

    Read the article

  • Need help with buffer overrun.

    - by Morinar
    I've got a buffer overrun I absolutely can't seem to figure out (in C). First of all, it only happens maybe 10% of the time or so. The data that it is pulling from the DB each time doesn't seem to be all that different between executions... at least not different enough for me to find any discernible pattern as to when it happens. The exact message from Visual Studio is this: A buffer overrun has occurred in hub.exe which has corrupted the program's internal state. Press Break to debug the program or Continue to terminate the program. For more details please see Help topic 'How to debug Buffer Overrun Issues'. If I debug, I find that it has broken in __report_gsfailure(), which I'm pretty sure comes from the /GS flag on the compiler and also signifies that this is an overrun on the stack rather than the heap. I can also see the function it threw this on as it was leaving, but I can't see anything in there that would cause this behavior; the function has also existed for a long time (10+ years, albeit with some minor modifications) and as far as I know this has never happened before. I'd post the code of the function, but it's decently long and references a lot of proprietary functions/variables/etc. I'm basically just looking for either some idea of what I should be looking for that I haven't, or perhaps some tools that may help. Unfortunately, nearly every tool I've found only helps with debugging overruns on the heap, and unless I'm mistaken, this is on the stack. Thanks in advance.

    Read the article

  • Updating Database from DataSet

    - by clawson
    I am having trouble updating my database from my code using a DataSet. I'm using SQL Server 2008 and Visual Studio 2008. Here is what I've done so far. I have created a table in SQL Server called MyTable which has two columns: id nchar(10), and name nchar(50). I have then created a data source in my VB.NET project that consists of this table using the dataset wizard, and called this dataset MyDataSet. I run the following code on a button click:

        Try
            Dim myDataSet As New MyDataSet
            Dim newRow As MyDataSet.MyTableRow = myDataSet.MyTable.NewMyTableRow
            newRow.id = "1"
            newRow.name = "Alpha"
            myDataSet.MyTable.AddMyTableRow(newRow)
            myDataSet.AcceptChanges()
        Catch ex As Exception
            MsgBox(ex.Message)
        End Try

    When I run this and check the rows in SQL Server, it returns 0 rows. What have I missed? How can I add these rows / save changes in a dataset to the database? I have seen other examples that use a TableAdapter, but I don't think I want to do this; I think I should be able to achieve this just using a DataSet. Am I mistaken? Help is greatly appreciated!

    Read the article

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model, and I'm trying to understand why that model is chosen versus a peer-to-peer (PTP) model. To me PTP would be preferable because going direct between peers would be more efficient (fewer IP hops and no processing along the way by a server computer), and there is no need for a server and its costs and dependencies. I found some products offer the ability to switch from PTP to C/S if PTP proves insufficient. As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation; then the server (supposedly with more bandwidth) could do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, and then the server would use its superior bandwidth to relay the message to the multiple other peers. But in a situation with one-on-one voice chatting, it seems that PTP would be best. A server would not reduce either of the two peers' bandwidth requirements and would only add unnecessary overhead, dependency and cost. In one-on-one voice chatting: Am I mistaken on anything above? Would peer-to-peer be best? Would a server provide anything of value that could not be provided by a client-only program? Is there anything else that I should be taking into consideration? And lastly, can you recommend any Silverlight PTP or C/S voice chat products? Thanks in advance for any info.
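
    As a back-of-the-envelope illustration of the relay argument (my own numbers, assuming a hypothetical 64 kbps voice stream), the upstream cost per peer in a full-mesh PTP call grows with the group size, while a relay server keeps it constant:

        const streamKbps = 64;   // assumed per-stream voice bitrate (illustrative only)

        // Upstream bandwidth one peer needs, in kbps.
        function uplinkPerPeerKbps(peers: number, usingRelayServer: boolean): number {
          // Full mesh: send your stream to every other peer.
          // Relay server: send it once; the server fans it out.
          return usingRelayServer ? streamKbps : streamKbps * (peers - 1);
        }

        console.log(uplinkPerPeerKbps(2, false)); //  64 - one-on-one PTP is cheap
        console.log(uplinkPerPeerKbps(5, false)); // 256 - mesh cost grows with group size
        console.log(uplinkPerPeerKbps(5, true));  //  64 - server relays the copies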

    Read the article

  • Chrome extension login security with iframe

    - by Weaver
    I should note, I'm not a Chrome extension expert. However, I'm looking for some advice or a high-level solution to a security concern I have with my Chrome extension. I've searched quite a bit but can't seem to find a concrete answer. The situation: I have a Chrome extension that needs to have the user log in to our backend server. However, it was decided for design reasons that the default Chrome popup balloon was undesirable. Thus I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting. The problem: Everything works, however now that I need to implement login functionality I've noticed a vulnerability: if the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the user and password when entered into the given fields. My devised solution: I took a look at some other Chrome extensions, like Buffer, and noticed what they do is load their popup from their website and, instead, embed an iframe which contains the popup in it. The popup would interact with the server inside the iframe. My understanding is that iframes are subject to the same-origin scripting policies as other websites, but I may be mistaken. As such, would doing the same thing be secure? TLDR: To simplify, if I embedded an https login form from our server into a given DOM, via a Chrome extension, are there security concerns with password sniffing? If this is not the best way to deal with Chrome extension logins, do you have suggestions as to what is? Perhaps there is a way to declare text fields that JavaScript simply cannot interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.
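
    For what it's worth, here is a rough TypeScript sketch of the iframe approach in a content script (my own illustration; the login URL and styling are hypothetical): the visited page cannot script into a cross-origin https frame, so it never sees the credential fields, although it could still remove or cover the frame.

        // content-script.ts - assumed to be injected into the visited page via the extension manifest.
        const LOGIN_URL = "https://example.com/extension-login";  // hypothetical https login form we control

        function injectLoginFrame(): void {
          const frame = document.createElement("iframe");
          frame.src = LOGIN_URL;            // credential inputs live inside this origin, not the host page
          frame.style.cssText =
            "position:fixed;top:20%;left:50%;width:360px;height:240px;" +
            "margin-left:-180px;border:0;z-index:2147483647;";
          document.body.appendChild(frame);
        }

        injectLoginFrame();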

    Read the article

  • Silverlight MVVM binding seems not to work

    - by Savvas Sopiadis
    Hi everybody! I'm building my first SL MVVM application (Silverlight 4 RC) and have some issues I don't understand. Having a WPF background, I don't know what is going on here. The ViewModel has several properties, one of which is called SelectedRecord. This is a get-only property and is defined like this:

        public Culture SelectedRecord
        {
            get { return culturesView.View.CurrentItem as Culture; }
        }

    As you can see, it gets the current item of a CollectionViewSource (called culturesView). So if I select a Culture, SelectedRecord gets its value directly from the CollectionViewSource, as expected. (Actually there is a datagrid control bound to the CollectionViewSource, hence it is possible to change the selected item.) OK. Now to the View. There are several views which access this ViewModel, and in particular there is one which shows the values of the aforementioned property SelectedRecord (let's call it the EditView). To show this EditView there is a button (which has its Command property bound to an ICommand in the ViewModel) which functions as expected - the first time. This means: 1st try: I select a record, switch to EditView, outcome: the selected record's values are shown (as expected!!). 2nd try: switch back to the datagrid, select another record, switch to EditView, outcome: the values of the previously shown record are shown again!!! WHY?? First I thought that SelectedRecord did not have the correct value set, but I was mistaken: it HAS the correct value! So it should be shown!? What am I missing? In WPF this would work!! Thanks in advance

    Read the article

  • Are there scenarios where the ViewModel needs to invoke methods on the View w.r.t. MVVM in WPF?

    - by Gishu
    As per the pattern, the ViewModel exposes Properties (with change notification) and Commands (to notify the VM of user actions) that the View binds to. The only communication that flows from the VM to the View is the property change notifications (so that the View can refresh itself with updated data). In the MVP or PresentationModel form of the pattern (if I'm not mistaken), the View implements a plain vanilla interface (consisting of methods, properties and/or events). With MVVM, it feels like methods on the IView have been outlawed (along with IView itself). One scenario I could think of was setting the focus to a certain control in the View. (When the user does ActionX, the focus should immediately be set to FieldY.) In MVP, I'd write this as IView.ActivateField(NameConstant), which the presenter or PM would invoke. In MVVM, this seems to be a fringe case that needs a workaround / a little bit of code-behind. The VM implements an ActiveField property, which it sets to NameConstant. The view picks up the change notification event and, in a code-behind event handler, activates the Name control. Is the above just an exception to the norm? Or are there other such scenarios where the VM needs to invoke a method on the View?

    Read the article

  • Dealing with Imprecise Drawing in CAD Drawing

    - by Graviton
    I have a CAD application that allows the user to draw lines and polygons and all that. One thorny problem I face is that user drawing can be highly imprecise. For example, a user might want to draw two rectangles that are connected to each other, so there should be one line shared by the two rectangles. However, it's easy for the user, instead of drawing one line, to draw two lines that are very close to each other - so close that, looking at the screen, you would mistake them for the same line, except that they aren't when you zoom in a little bit. My application would require the user to draw the lines properly (or my preprocessing must be able to do auto-correction), or else my internal algorithm would not be able to process the inputs correctly. What is the best strategy to combat this kind of problem? I am thinking about rounding the point coordinates to a certain degree of precision, and although I can't exactly pinpoint the problem with this approach, I feel that it is not the correct way of doing things and that it will introduce a new set of problems. Any idea?
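
    One common alternative to blanket coordinate rounding is vertex snapping with a tolerance: when a point is placed, reuse any existing vertex within a small radius, so "almost shared" edges become genuinely shared. A rough TypeScript sketch (my illustration; the tolerance value is a made-up placeholder):

        interface Point { x: number; y: number; }

        const SNAP_TOLERANCE = 0.5;  // assumed drawing units; tune to the zoom level or grid

        function snapToExisting(p: Point, existing: Point[], tol = SNAP_TOLERANCE): Point {
          for (const q of existing) {
            const dx = p.x - q.x, dy = p.y - q.y;
            if (dx * dx + dy * dy <= tol * tol) {
              return q;  // reuse the existing vertex instead of creating a near-duplicate
            }
          }
          return p;      // nothing nearby - keep the new point as drawn
        }

    In a real drawing the lookup would normally go through a spatial index (a grid or quadtree) rather than a linear scan, and the same idea extends to snapping new endpoints onto nearby existing edges.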

    Read the article

  • SQL Selects on subsets

    - by Adam
    I need to check if a row exists in a database; however, I am trying to find the way to do this that offers the best performance. This is best summarised with an example. Let's assume I have the following table:

        dbo.Person(
            FirstName varchar(50),
            LastName varchar(50),
            Company varchar(50)
        )

    Assume this table has millions of rows, however ONLY the column Company has an index. I want to find out if a particular combination of FirstName, LastName and Company exists. I know I can do this:

        IF EXISTS(select 1 from dbo.Person
                  where FirstName = @FirstName and LastName = @LastName and Company = @Company)
        Begin
            ....
        End

    However, unless I'm mistaken, that will do a full table scan. What I'd really like it to do is a query where it utilises the index. With the table above, I know that the following query will have great performance, since it uses the index:

        Select * from dbo.Person where Company = @Company

    Is there any way to make the search only on that subset of data? e.g. something like this:

        select * from (
            Select * from dbo.Person where Company = @Company
        ) where FirstName = @FirstName and LastName = @LastName

    That way, it would only be doing a table scan on a much narrower collection of data. I know the query above won't work, but is there a query that would? Oh, and I am unable to create temporary tables, as the user will only have read access.

    Read the article

  • Creating a draft version of the page before publishing in Drupal 6?

    - by Marco
    I've been looking for a good way to handle revisions in Drupal, but I have yet to succeed. For some reason there is no built-in way to save a draft (that I've found so far), and the modules I've tried so far do not seem to fully work. First I tried save_as_draft, which seemed to do almost what I wanted and, if I'm not mistaken, also handles CCK fields. Sadly it seems to be broken somehow, so I can't edit a page once I've saved it as a draft... maybe I could fix it by going through the code, but that would not be my preferred solution. The other module I tried is aptly named draft, but from what I can tell, this module only handles the title and body fields, and does so in a way that appears odd to me. Is there some common practice to solve this? I can't imagine that nobody has had to solve this before, but I haven't found any good solution to it yet. Clarification: I need this functionality for already existing content; that is, I want to be able to create and edit a draft version of an already published page, while the "old" version would still be available to anonymous users.

    Read the article

  • Authentication Error when accessing Sharepoint list via web service

    - by Joe
    I wrote a Windows service a few months ago that would ping a SharePoint list using the _vti_bin/lists.asmx function GetListItemChanges. It was working fine until a few weeks ago, when my company upgraded our SharePoint instance to SP1. Now whenever my service attempts to access SharePoint I receive a 401.1 authentication error: Error: You are not authorized to view this page. You do not have permission to view this directory or page using the credentials that you supplied. Please try the following: Contact the Web site administrator if you believe you should be able to view this directory or page. HTTP Error 401.1 - Unauthorized: Access is denied due to invalid credentials. Internet Information Services (IIS) I have checked, and my privileges on the site have not changed. Here is the code in which I call the list:

        Lists listsService = new Lists();
        listsService.Credentials = new NetworkCredential("UserName", "Password", "domain");
        Result = listsService.GetListItemChanges("List name", null, dTime.ToString(), null);

    It has also been brought to my attention that basic authentication may have been disabled on our farm. I don't believe I'm using that, but I may be mistaken.

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping Web site. After a recent update, when you are logged into your shopping account, the session is now not secured. No 'https', no browser 'lock'. All the personal contact info, shopping history -- and if I'm not mistaken, submit and change password -- are being sent unencrypted. There is a small frame around the credit card fields that is https. There's a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser." On top of this, the most prominent login fields for the site are broken, and haven't been fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now is it just me -- or is this simply incomprehensible for a billion-dollar company, with a significant shopping site, in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their Web technical team consists of. I'm getting A) some customer service reps who were given a 3-hour training course on Web site maintenance, B) a 14-year-old boy in his bedroom masquerading as a major technical services company, C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • How much does Website Development cost nowadays?

    - by Andreas Grech
    I am thinking of setting up my own freelance business, but coming from a workplace that offers a particular service to huge clients, I do not know what the current charges for websites are nowadays. I know that as technology just keeps changing and changing (most of the time, for the better...), the amount you charge for a single website is constantly changing too. For example, I don't think static websites (with just static HTML pages) are that expensive today, no? (As I said, I might be mistaken, since I haven't really touched this freelance industry yet.) So, freelance web developers out there, can you give me estimates of how much you charge your clients? Some examples of websites I'd like an approximate charge for:
    - ~10 static HTML pages
    - ~10 DHTML pages (with maybe a flashy menu on the top/side)
    - Database-driven websites with a standard CMS (be it the one you developed, or an existing one)
    - Database-driven websites with a custom-built CMS for the particular client
    - Using an existing template for the design
    - Starting the design from scratch
    - etc...
    I know that clients normally don't really care about the technologies used to construct their websites, but do you charge differently according to which technology you use to build the website? As in, is the technology a factor when setting the price? ...be it ASP.NET, PHP, Ruby on Rails etc... Also, how do you go about charging your clients for your services? What are the major factors that you consider when setting a price tag for a website for a client? And better yet, how do you even find prospective clients? <= [or should I leave this question for a different post?] By the way, in your post, please also mention some numbers (in cash values, be it in USD, GBP, EUR or anything) because I want to be able to calculate some averages from this post when some answers stack up.

    Read the article

  • Hibernate c3p0 broken pipe

    - by raven_arkadon
    Hi, I'm using Hibernate 3 with c3p0 for a program which constantly extracts data from some source and writes it to a database. Now the problem is that the database might become unavailable for some reason (in the simplest case: I simply shut it down). If anything is about to be written to the database, there should not be any exception - the query should wait for all eternity until the database becomes available again. If I'm not mistaken, this is one of the things the connection pool could do for me: if there is a problem with the db, just retry the connection - in the worst case, forever. But instead I get a broken pipe exception, sometimes followed by connection refused, and then the exception is passed to my own code, which shouldn't happen. Even if I catch the exception, how could I cleanly reinitialize Hibernate again? (So far, without c3p0, I simply built the session factory again, but I wouldn't be surprised if that could leak connections - or is it OK to do so?) The database is Virtuoso open source edition. My hibernate.cfg.xml c3p0 config:

        <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="hibernate.c3p0.breakAfterAcquireFailure">false</property>
        <property name="hibernate.c3p0.acquireRetryAttempts">-1</property>
        <property name="hibernate.c3p0.acquireRetryDelay">30000</property>
        <property name="hibernate.c3p0.automaticTestTable">my_test_table</property>
        <property name="hibernate.c3p0.initialPoolSize">3</property>
        <property name="hibernate.c3p0.minPoolSize">3</property>
        <property name="hibernate.c3p0.maxPoolSize">10</property>

    BTW: the test table is created and I get tons of debug output, so it seems it actually reads the config.

    Read the article

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table:

        @Entity
        public class SiteConfiguration extends ConfigurationSet {
            @ManyToMany
            @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId"))
            @JoinTable(
                name="SiteConfig_InstConfig",
                joinColumns = @JoinColumn(name="SiteConfigId"),
                inverseJoinColumns = @JoinColumn(name="InstallationConfigId")
            )
            Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
                new HashMap<SiteTypeInstallation, InstallationConfiguration>();
            ...
        }

    The underlying table (in Oracle 11g) is:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITECONFIGID                   NOT NULL NUMBER(19)
        SITETYPEINSTALLATIONID         NOT NULL NUMBER(19)
        INSTALLATIONCONFIGID           NOT NULL NUMBER(19)

    The key entity used to have a three-column primary key in the database, but is now redefined as:

        @Entity
        public class SiteTypeInstallation implements IdResolvable {
            @Id
            @GeneratedValue(generator="SiteTypeInstallationSeq", strategy= GenerationType.SEQUENCE)
            @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1)
            long id;

            @ManyToOne
            @JoinColumn(name="SiteTypeId")
            SiteType siteType;

            @ManyToOne
            @JoinColumn(name="InstalationRoleId")
            InstallationRole role;

            @ManyToOne
            @JoinColumn(name="InstallationTypeId")
            InstType type;
            ...
        }

    The table for this has a primary key 'Id' and foreign key constraints+indexes for each of the other columns:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITETYPEID                     NOT NULL NUMBER(19)
        INSTALLATIONROLEID             NOT NULL NUMBER(19)
        INSTALLATIONTYPEID             NOT NULL NUMBER(19)
        ID                             NOT NULL NUMBER(19)

    For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error:

        org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId])

    If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id is clearly defining a simple key, and doubly confused why it picks exactly just those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars

    Read the article

  • How do I ensure my C# software can access the internet in a Citrix + ISA environment?

    - by TomFromThePool
    Hi everyone, A client recently informed us that deployment of our software in their environment has failed due to a proxy error when the software attempts to access the internet. The client has a combination of Citrix and Microsoft's ISA Server. The software allows the use of a proxy and the ability to manually enter authentication information, or automatically retrieve the current system proxy settings. The error returned is the standard 407 authentication error, but the client assures us that they have entered the authentication information required. They have also shown us the snippet of the ISA error logs which identifies the client as Anonymous and the authentication protocol as Basic. I have a few questions I suppose: How should I go about dealing with the ISA server in my code? I have no real experience with this environment and am assuming that the ISA server is treated like any other proxy. If I am mistaken, what should I be doing? Does ISA allow the administrator to disallow specific authentication protocols - and if this is the case and 'Basic' auth is disallowed, would it still return a 407 error? Could the Citrix environment have caused this issue? Is there any particular way to ensure that my software will work in such an environment? Code samples would be much appreciated. I have neither a Citrix test server nor an ISA server at my disposal to carry out testing, so I am currently trying to identify possible causes before I make the case for investment in a more robust testing environment. Thanks for any help!

    Read the article
