Search Results

Search found 3440 results on 138 pages for 'returning'.


  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found.

    These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence, so applications would have to filter that noise themselves. Older Mac versions used only CR as the line ending in text mode, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing by itself instead of relying on text mode.

    Implementing newline interpretation in the parser might also avoid the overhead of text mode: with text mode, buffers need to be rewritten (and possibly resized) before being returned to the application, which may be less efficient than doing the translation in the application itself.

    So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
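    For the "parse it yourself" approach mentioned above, here is a minimal sketch (written in C#, which never applies text-mode translation, so the normalization is explicit; the method name is hypothetical):

        using System.IO;
        using System.Text;

        static class LineEndings
        {
            // Normalizes CR LF, lone CR, and lone LF to '\n', regardless of which
            // platform produced the file. The file is read as raw bytes, so no
            // host-OS text-mode translation is involved.
            public static string NormalizeNewlines(string path)
            {
                byte[] raw = File.ReadAllBytes(path);       // "binary mode": no translation
                string text = Encoding.UTF8.GetString(raw); // assumes UTF-8 content
                var sb = new StringBuilder(text.Length);
                for (int i = 0; i < text.Length; i++)
                {
                    if (text[i] == '\r')
                    {
                        sb.Append('\n');
                        if (i + 1 < text.Length && text[i + 1] == '\n')
                            i++; // swallow the LF of a CR LF pair
                    }
                    else
                    {
                        sb.Append(text[i]);
                    }
                }
                return sb.ToString();
            }
        }

    A dozen lines of explicit parsing handle all three historical conventions at once, which illustrates how little the host-OS translation actually buys.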

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    - Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    - Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
    - Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)

    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    - IStreamable<>, which logically represents a temporal stream.
    - IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 – 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics of moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: not supported.

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count(),
            r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types of the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes

    - ShiftEventTime => AlterEventStartTime: the alter-event-lifetime overload taking a new start time value has been renamed.
    - CountByStartTimeWindow => CountWindow

    Read the article

  • Webservice Return Generic Result Type or Purposed Result Type

    - by hanzolo
    I'm building a webservice which returns JSON / XML / SOAP at the moment, and I'm not entirely sure which approach to returning results is best. Which would be a better return value: a generic "transfer" type structure, which carries generic properties, or a purposed type with distinct properties?

        class GenericTransferObject {
            public string returnVal;
            public string returnType;
        }

    VS

        class PurposedTransferObject_1 {
            public string Property1;
        }

        // and then building additional "types" for additional values
        class PurposedTransferObject_2 {
            public string PropertyA;
            public string PropertyB;
        }

    Now, this would be serialized and returned from a web service call via some client technology, jQuery in this example. So if I called /GetDaysInWeek/ I would either get back:

        {"returnType": "DaysInWeek", "returnVal": "365"}

    OR

        {"DaysInWeek": "365"}

    And then it would go from there. On the one hand there's flexibility with the first approach: I can add "returnTypes" without needing to adjust the client, other than referencing an additional "index". But if I have to add a property, now I'm changing a structure definition. Is there an obvious choice in this situation?
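    To make the trade-off concrete, here is a minimal sketch of how the two shapes serialize (System.Text.Json is used purely for illustration; the property names come from the examples above):

        using System;
        using System.Text.Json;

        public class GenericTransferObject
        {
            public string returnType { get; set; }
            public string returnVal { get; set; }
        }

        public class PurposedTransferObject_1
        {
            public string DaysInWeek { get; set; }
        }

        public static class Demo
        {
            public static void Main()
            {
                // Generic envelope: one type serves every call, but the client must
                // switch on returnType and interpret returnVal itself.
                var generic = new GenericTransferObject { returnType = "DaysInWeek", returnVal = "365" };
                Console.WriteLine(JsonSerializer.Serialize(generic));
                // {"returnType":"DaysInWeek","returnVal":"365"}

                // Purposed type: the property name carries the meaning, so adding a
                // value means changing a structure definition on both ends.
                var purposed = new PurposedTransferObject_1 { DaysInWeek = "365" };
                Console.WriteLine(JsonSerializer.Serialize(purposed));
                // {"DaysInWeek":"365"}
            }
        }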

    Read the article

  • Deploy multiple emails to email providers, but without showing favouritism

    - by Ardman
    We are currently developing a new email deployment system. We have the system configured so that it reads a record from the database, loads the email content and deploys it to the target. Now we want to move this over to multiple threads. That is easily done, except we then hit the email providers returning SMTP codes referring to "Too many connections" or "Deferred connection". The solution to this is to have a thread open up a connection to the email provider, deploy n emails and then disconnect. We have configured the application so that it will support these session-based email deployments. The problem is this: the database table has multiple email addresses in it, and they aren't grouped by email provider, because that would show favouritism. We need to be able to retrieve a set number of, for example, Hotmail addresses (@hotmail.com, @hotmail.co.uk, @live.co.uk) so that we reduce the number of connections to Hotmail and the risk of getting the "Too many connections" error. We are at the point now where we have gone round and round in circles trying to get a solution, so I thought I'd throw it out there and see if anyone has any ideas? EDIT: I would like to stress that this application is not used for spamming purposes.
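    One way to batch by provider without favouring anyone, sketched in C# (the domain-to-provider map and the batch size are hypothetical; the real address list would come from the database):

        using System.Collections.Generic;
        using System.Linq;

        static class ProviderBatcher
        {
            // Hypothetical mapping of mail domains to a provider key; several
            // domains can share one provider and thus one connection budget.
            static readonly Dictionary<string, string> DomainToProvider = new()
            {
                ["hotmail.com"]   = "Hotmail",
                ["hotmail.co.uk"] = "Hotmail",
                ["live.co.uk"]    = "Hotmail",
                ["gmail.com"]     = "Google",
            };

            static string ProviderOf(string address)
            {
                string domain = address.Split('@').Last().ToLowerInvariant();
                return DomainToProvider.TryGetValue(domain, out var p) ? p : domain;
            }

            // Group pending addresses by provider, then emit fixed-size batches,
            // visiting the nonempty provider queues round-robin so no provider
            // is consistently served first.
            public static IEnumerable<List<string>> Batches(IEnumerable<string> pending, int batchSize)
            {
                var queues = pending.GroupBy(ProviderOf)
                                    .Select(g => new Queue<string>(g))
                                    .ToList();
                while (queues.Any(q => q.Count > 0))
                {
                    foreach (var q in queues.Where(q => q.Count > 0))
                    {
                        var batch = new List<string>(batchSize);
                        while (batch.Count < batchSize && q.Count > 0)
                            batch.Add(q.Dequeue());
                        yield return batch; // one SMTP session: connect, send the batch, disconnect
                    }
                }
            }
        }

    Each yielded batch maps to one SMTP session, which caps the connection count per provider while keeping the ordering across providers even-handed.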

    Read the article

  • What Does Installing Ubuntu "Alongside" Windows Entail?

    - by Soft Skeleton
    I recently posted a question about an error I was receiving trying to access Ubuntu from the boot menu. I am using Windows 7 and Ubuntu 12.x (I THINK, because I haven't accessed it in over a year, due to being unable to run an important program for one of my classes on Ubuntu). On another laptop, I partitioned the hard drive and installed Windows and Ubuntu on separate partitions. On this laptop, I simply installed Ubuntu from Windows, picking the option "alongside Windows", and didn't partition my hard drive manually.

    I was under the impression that "alongside" meant Ubuntu would partition my hard drive, and that if I were to return my Windows partition to factory settings it would not affect the Ubuntu partition. However, given my current problem, I am wondering if I was mistaken in this assumption. When installing Ubuntu from Windows, selecting "alongside Windows" as the option in the Ubuntu installer, does that simply install Ubuntu within the Windows partition, so that returning it to factory settings would wipe out anything I had on the Ubuntu OS as well?

    Ubuntu is still in the boot menu as an option, but when I try to access it, it says the drive is "corrupt", and Wubi is mentioned in the error. I additionally downloaded a program, run from Windows, to investigate partitions, and there was no sign of my Ubuntu partition viewable from Windows. Is it possible Windows just can't see it? Any insight, corrections or answers are appreciated.

    Read the article

  • Web Services and code lists

    - by 0x0me
    Our team is heavily discussing how to handle code lists in a web service definition. The design goal is to describe a provider API for querying a system using various values. Some of them are catalogs, i.e. code lists. A catalog or code list is a set of key-value pairs. There are different systems (at least 3) maintaining possibly different code lists. Each system should implement the provider API, but each system might have a different code list for the same business entity - think of colors, for example. One system knows [(1,'red'),(2,'green')] and another one knows [(1,'lightgreen'),(2,'darkgreen'),(3,'red')], etc. Access to the different provider API implementations will be encapsulated by a query service, but there is already one candidate which might use at least one provider API directly. The current options for the API design under discussion are:

    - Use an abstract code list in the interface definition: the web service interface defines a well-known set of code lists which are expected to be used for querying and returning data. Each provider API implementation has to map the request and response values from those abstract code lists to the system-specific ones.
    - Let the query component handle the code lists: the encapsulating query service knows the code list set of each provider API implementation and takes care of mapping the input and output to the system-specific code lists of the queried system.
    - Do not use code lists in the query definition at all: just query code lists by a plain string and let the provider API implementation figure out the right value. This might lead to a loss of information and possibly many false positives, because the input string cannot be canonically mapped to a code list value (e.g. green -> lightgreen, or green -> darkgreen, or both).

    What are your experiences with, or solutions to, such a problem? Could you give any recommendation?
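    A minimal sketch of the first option - an abstract code list in the contract, with a per-system mapping in each provider implementation (written in C#; all names are hypothetical):

        using System.Collections.Generic;

        // Canonical color codes fixed by the provider API contract.
        enum CanonicalColor { Red = 1, Green = 2 }

        static class SystemBColorMapping
        {
            // System B knows three keys; two of them collapse onto the canonical
            // Green, so the mapping is lossy by design and must be chosen when
            // the adapter for System B is written.
            static readonly Dictionary<int, CanonicalColor> ToCanonicalMap = new()
            {
                [1] = CanonicalColor.Green, // 'lightgreen'
                [2] = CanonicalColor.Green, // 'darkgreen'
                [3] = CanonicalColor.Red,
            };

            public static CanonicalColor ToCanonical(int systemBKey) => ToCanonicalMap[systemBKey];
        }

    The sketch makes the main cost visible: mapping back out (canonical to system key) needs a second table, and any system with finer distinctions than the canonical list loses information at the boundary.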

    Read the article

  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving in to Domain Driven Design and some of the concepts I'm coming across make a lot of sense on the surface, but when I think about them more I have to wonder if they're really a good idea. The concept of Aggregates, for instance, makes sense. You create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may list only the order numbers, with links to click to open each order. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But... if I call GetAll to list all my orders, it would seem that this pattern would require the entire list of aggregate information to be returned - complete orders, order lines, etc. - when I only need a small subset of that information (just the header information). Am I missing something? Or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need it. Certainly, one could create methods on your repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like repository in the first place. Can anyone clarify this for me?
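    The usual way out is a separate read model (CQRS-style): list pages query lightweight projections, and the full aggregate is loaded only when behaviour is needed. A hedged sketch in C# (type and member names hypothetical):

        using System;
        using System.Collections.Generic;

        // Full aggregate: loaded when behaviour is needed (edit, delete, save).
        class Order
        {
            public int Id { get; set; }
            public List<OrderLine> Lines { get; } = new();
            // ... invariants and domain behaviour live here
        }

        class OrderLine
        {
            public string Sku { get; set; }
            public int Quantity { get; set; }
        }

        // Lightweight projection for list pages: just the header columns.
        record OrderHeader(int Id, DateTime PlacedOn, decimal Total);

        interface IOrderRepository
        {
            Order GetById(int id); // full aggregate, for commands
            void Save(Order order);
            void Delete(int id);
        }

        interface IOrderReadModel
        {
            // Backed by a plain SQL projection: never materializes order lines.
            IReadOnlyList<OrderHeader> GetHeaders(int page, int pageSize);
        }

    GetHeaders can map straight from a query to DTOs, so listing orders never touches order lines, while GetById still returns the full aggregate when a command has to enforce invariants.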

    Read the article

  • WiFi slow sometimes, reboot helps, how do I debug it?

    - by January
    Ubuntu 12.04.1 with all updates installed. Laptop: Lenovo Thinkpad X230 with Intel Corporation Centrino Advanced-N 6205. WiFi sometimes becomes extremely slow. Often this occurs when I wake the system from suspend and connect to a different network. I find no obvious clues in the system logs. /etc/init.d/network-manager restart doesn't help, but a reboot does. How can I go on with debugging this issue? Specifically, which parts of the system should I try to restart (without a complete reboot)? I know of problems with Intel WiFi (see for example this question and the instructions here), but if that was the problem, I would expect the WiFi to be slow at all times, not just sometimes. Also, I have a gut feeling that it might be a DNS issue (for example, getting a page from a known server is faster than accessing a new server), but I don't know how to tackle it. Update: despite numerous updates in the meantime, I still observe this behavior. It always happens when I access my WiFi router at home after returning from work; when I reboot my laptop, the connection speed is good again.

    Read the article

  • How do you handle animations that are for transitioning between states?

    - by yaj786
    How does one usually handle animations that transition between a game object's states? For example, imagine a very simple game in which a character can only crouch or stand normally. Currently, I use a custom Animation class like this:

        class Animation {
            int numFrames;
            int curFrame;
            Bitmap spriteSheet;
            // ... various functions for pausing, returning the current frame, etc.
        }

    and an example Character class:

        class Character {
            int state;
            Animation standAni;
            Animation crouchAni;
            // ... etc, etc.
        }

    Thus, I use the state of the character to draw the necessary animation:

        if (state == STATE_STAND)
            draw(standAni.updateFrame());
        else if (state == STATE_CROUCH)
            draw(crouchAni.updateFrame());

    Now I've come to the point where I want to draw "in-between" animations, because right now the character just jumps immediately into a crouch instead of bending down. What is a good way to handle this? And if the way that I handle storing Animations in the Character class is not a good way, what is? I thought of creating states like STATE_STANDING_TO_CROUCHING, but I feel like that may get messy fast.
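    One common approach is to make each transition a short-lived state of its own, driven by a one-shot animation that advances to the target state when it finishes. A sketch in C# (names hypothetical, following the structure above):

        // Transition states are regular states backed by one-shot clips; when
        // the clip finishes, the character settles into the target state.
        enum CharState { Stand, StandToCrouch, Crouch }

        class Animation
        {
            readonly int numFrames;
            int curFrame;
            public Animation(int numFrames) { this.numFrames = numFrames; }
            public bool IsFinished => curFrame >= numFrames - 1; // one-shot clips do not loop
            public void Reset() => curFrame = 0;
            public void Step() { if (!IsFinished) curFrame++; }  // advance one frame per update
        }

        class Character
        {
            CharState state = CharState.Stand;
            readonly Animation standAni, crouchAni, standToCrouchAni;

            public Character(Animation stand, Animation crouch, Animation standToCrouch)
            {
                standAni = stand; crouchAni = crouch; standToCrouchAni = standToCrouch;
            }

            public void StartCrouch()
            {
                // Instead of jumping straight to Crouch, enter the transition state.
                state = CharState.StandToCrouch;
                standToCrouchAni.Reset();
            }

            public void Update()
            {
                CurrentAnimation().Step();
                if (state == CharState.StandToCrouch && standToCrouchAni.IsFinished)
                    state = CharState.Crouch; // transition finished: settle in the target state
            }

            public Animation CurrentAnimation() => state switch
            {
                CharState.Stand => standAni,
                CharState.StandToCrouch => standToCrouchAni,
                _ => crouchAni,
            };
        }

    Because each transition is just a (clip, target state) pair, this generalizes to a small table keyed by (from, to) if the number of states grows, instead of an explosion of hand-written enum values.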

    Read the article

  • AndEngine GLES2 - getting Black screen on emulator 4.1

    - by dizworld.com
    I'm new to AndEngine. I created the following code:

        public class MainActivity extends BaseGameActivity {
            static final int CAMERA_WIDTH = 800;
            static final int CAMERA_HEIGHT = 480;
            public Font mFont;
            public Camera mCamera;
            // A reference to the current scene
            public Scene mCurrentScene;
            public static BaseActivity instance;

            public EngineOptions onCreateEngineOptions() {
                instance = this;
                mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                        new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
            }

            @Override
            public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception {
                mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(), 256, 256,
                        Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
                mFont.load();
            }

            @Override
            public void onCreateScene(OnCreateSceneCallback arg0) throws Exception {
                mEngine.registerUpdateHandler(new FPSLogger());
                mCurrentScene = new Scene();
                Log.v("Scene", "enter");
                mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
                // return mCurrentScene;
            }

            @Override
            public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception {
                // TODO Auto-generated method stub
            }
        }

    I got this code from sites where onCreateScene() returns the scene, but in AndEngine GLES2 the onCreateScene() method has no scene return value... so my first run is BLACK. Any suggestions? :)
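    In the GLES2 branch these lifecycle methods are asynchronous: instead of returning a value, each one must signal completion through the callback it receives, otherwise the engine never advances past a black screen. A sketch of the likely fix, in the same Java as the snippet above (the *Finished callback methods are from the AndEngine GLES2 API):

        @Override
        public void onCreateResources(OnCreateResourcesCallback pCallback) throws Exception {
            mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(), 256, 256,
                    Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
            mFont.load();
            pCallback.onCreateResourcesFinished(); // tell the engine the resources are ready
        }

        @Override
        public void onCreateScene(OnCreateSceneCallback pCallback) throws Exception {
            mEngine.registerUpdateHandler(new FPSLogger());
            mCurrentScene = new Scene();
            mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
            pCallback.onCreateSceneFinished(mCurrentScene); // hand the scene back instead of returning it
        }

        @Override
        public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pCallback) throws Exception {
            pCallback.onPopulateSceneFinished(); // must be called even when there is nothing to populate
        }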

    Read the article

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library. The trade-offs I see include:

    - More exception classes can allow very fine-grained error handling for API users (prone to user configuration or data errors, or files not being found)
    - More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code
    - More exception classes can mean more code maintenance
    - More exception classes can mean the API is less approachable to users

    The scenarios I wish to understand exception usage in include:

    - During a 'configuration' stage, which might include loading files or setting parameters
    - During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread

    Other patterns of error reporting without using exceptions, or with fewer exceptions (as a comparison), might include:

    - Fewer exceptions, but embedding an error code that can be used as a lookup
    - Returning error codes and flags directly from functions (sometimes not possible from threads)
    - Implementing an event or callback system upon error (avoids stack unwinding)

    As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
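    For scale, one common middle ground is a shallow hierarchy: a single library base type, one subtype per phase, and an error code for fine-grained detail. A hedged sketch in C# (all names hypothetical):

        using System;

        // One catchable base type for the whole library.
        public class AcmeLibException : Exception
        {
            public int ErrorCode { get; } // fine-grained detail without a class per failure
            public AcmeLibException(string message, int errorCode, Exception inner = null)
                : base(message, inner) => ErrorCode = errorCode;
        }

        // One subtype per phase, so callers can react to *when* it failed.
        public sealed class ConfigurationException : AcmeLibException
        {
            public string FileName { get; } // phase-specific context travels with the exception
            public ConfigurationException(string message, int code, string fileName)
                : base(message, code) => FileName = fileName;
        }

        public sealed class OperationException : AcmeLibException
        {
            public OperationException(string message, int code, Exception inner = null)
                : base(message, code, inner) { }
        }

    Callers who care about the distinction catch the subtypes; everyone else catches the base type once and still gets the error code.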

    Read the article

  • Getting the total number of processors a computer has (c#)

    - by mbcrump
    Here is a code snippet for getting the total number of processors a computer has without using Environment.ProcessorCount. I found out that Environment.ProcessorCount does not necessarily return the correct value on some Intel-based CPUs.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Globalization;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication4
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int c = ProcessorCount;
                    Console.WriteLine("The computer has {0} processors", c);
                    Console.ReadLine();
                }

                private static class NativeMethods
                {
                    [StructLayout(LayoutKind.Sequential)]
                    internal struct SYSTEM_INFO
                    {
                        public ushort wProcessorArchitecture;
                        public ushort wReserved;
                        public uint dwPageSize;
                        public IntPtr lpMinimumApplicationAddress;
                        public IntPtr lpMaximumApplicationAddress;
                        public UIntPtr dwActiveProcessorMask;
                        public uint dwNumberOfProcessors;
                        public uint dwProcessorType;
                        public uint dwAllocationGranularity;
                        public ushort wProcessorLevel;
                        public ushort wProcessorRevision;
                    }

                    [DllImport("kernel32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
                    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);
                }

                public static int ProcessorCount
                {
                    get
                    {
                        NativeMethods.SYSTEM_INFO lpSystemInfo = new NativeMethods.SYSTEM_INFO();
                        NativeMethods.GetNativeSystemInfo(ref lpSystemInfo);
                        return (int)lpSystemInfo.dwNumberOfProcessors;
                    }
                }
            }
        }

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely to VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Emulate Historical Figures, e.g. Einstein - Is this possible using linguistic logic for my http://www.ustimeline.com Education System

    - by Johnnylight
    After hearing about the success of IBM's Watson, I started thinking: perhaps emulating human language is now possible? My goal is to create virtual historical characters to represent the main characters in my Adventur-Cation The Great American Adventure program, such as Einstein or Crazy Horse. The goal is to build an intelligent system capable of indexing the internet and storing the data using a schema informed by modern knowledge of linguistic theory (phonemes, morphemes, syntax), so as to build a system capable of returning a semantically sound response very similar to the response the same person would give if still alive today. The goal would be to use the same engine/system for all characters. Each character would have their own digital representation and voice, and would organize data differently based on tags/keywords stored about the individual. Imagine a Max Headroom Einstein. Based on the success of Watson, I believe something like this may now be possible. It would be an interesting way to study history, and would be a vehicle for entertainment as well. Can anyone confirm whether this has already been attempted? Is anyone interested in exploring this using cognitive science, psychology, artificial intelligence, historical data captured on the internet, and linguistic theory?

    Read the article

  • fwupd to update OCZ RevoDrive firmware gets a 'Permission denied' error

    - by Late
    Try as I might, I just can't get it working (http://www.ocztechnology.com/ssd_tools/OCZ_RevoDrive_and_RevoDrive_X2/). There's another thread here with a similar issue: Command is giving me "bash: ./fwupd: cannot execute binary file". I'm running the command ./fwupd /dev/sdb, which keeps returning bash: ./fwupd: Permission denied. I have tried running both bit versions available of fwupd on both of the latest 32- and 64-bit Ubuntu 11.10 releases, running the OS from a USB stick, but to no avail (could this be the problem?). In the other thread it was suggested that chmod +x fwupd (or chmod 0755 fwupd) should resolve this issue, but at least for me it has been for naught. It was also suggested to install certain libraries, but those were already included in the Ubuntu build, and I didn't have any luck after updating with apt-get. I also tried giving fwupd more privileges (r, w and x), and running it in different ways from different places (where I'd have fwupd present, of course), among other things. I also gave Ubuntu 10.04 LTS a shot, but it didn't even launch on either of my computers, though that's not the issue here. If anyone has any ideas on what the problem is and how I could get this working, it would be most appreciated!

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app - most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data the app should be returning when they run.

    Read the article

  • Release Notes for 6/14/2012

    Here are the notes for this week's release:

    Diffs in Pull Requests and Commits

    We altered the way we display diffs across commits and pull requests to maximize the amount of vertical real estate devoted to the diff. Before, the viewport for diffs was always snapped to the height of the browser, which meant that on lower resolutions, the amount of space for viewing diffs could become very tiny. Now, the majority of the browser's vertical space is devoted to viewing the diffs. Let us know what you think!

    Bug Fixes

    - Fixed an issue where returning to the list of files changed from a diff would sometimes not show the list of files.
    - Fixed the dialogs for approving and denying requests to join projects.
    - Fixed various issues around validation of project details when publishing a project.
    - Fixed an issue that caused the formatting of our tabs in pull requests to not display properly.
    - Fixed an issue where users browsing Unicode files in a Git project would see error pages.
    - Fixed various issues where the option to subscribe to notifications would not appear properly.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Oracle EZConnect in Mediawiki

    - by raindog308
    MediaWiki supports Oracle and I'm trying to configure it in the installer. The installer says you can use EZConnect... something like:

        user/pass@//server.example.com/dbname

    or, since the installer has fields elsewhere for user/pass:

        server.example.com/dbname

    The installer includes a link to the EZConnect docs: http://docs.oracle.com/cd/E11882_01/network.112/e10836/naming.htm. All the examples in that doc include a forward slash. But every combination I've tried results in an error like this:

        Invalid database TNS "sever.example.com/service_name". Use only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_) and dots (.).

    I can't find any examples of EZConnect that don't include a forward slash. That error is from MediaWiki, not Oracle. I'm tailing the listener log and there is no connection made - MediaWiki is returning the error without trying to connect. I'm using PHP OCI8 with the Oracle Instant Client. I don't have a tnsnames.ora set up for this client - which is kind of the point of EZConnect. I did write a test PHP script that connects via oci_connect just fine. Has anyone configured MediaWiki to use Oracle with EZConnect? If so, what did you use in the installer?

    Read the article

  • Is IDirectInput8::FindDevice totally broken on Windows 7?

    - by Noora
    I'm developing on Windows 7 and using DirectInput8 for my input needs. I'm tracking gamepad additions and removals (that is, GUID_DEVINTERFACE_HID devices) using the DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE messages, which works fine. However, what I've found is that no matter what I do, passing the values received from DBT_DEVICEARRIVAL to IDirectInput8's FindDevice method always fails to identify the device, returning DIERR_DEVICENOTREG. DirectInput still clearly knows about the device, because I can enumerate and create it just fine. I've tried this with three different gamepads, and the error persists, so it's not about that either. I also tried passing some alternative interface GUIDs to the RegisterDeviceNotification call; that didn't help. So, has anyone else faced the same problem, and have you found a usable workaround? I'm afraid I'll soon have to stoop to re-enumerating all devices when something is added or removed, but I'll first give this question one last shot here. EDIT: For the record, I've also tried pretty much every single HID API & SetupAPI function for alternative ways of figuring out the needed GUIDs, with zero success. So if you're facing the same problem as me, don't bother with that route. I'm pretty sure those GUIDs are made up by DirectInput itself somehow. Short of reverse engineering dinput8.dll, I'm truly out of ideas now.

    Read the article

  • How to debug a fatal system crash - [graphical loop DOTA2]?

    - by Huw
    Whilst playing DOTA2, I occasionally and apparently randomly experience a fatal crash where the display freezes and the audio loops over approximately the last half-second. Now, I'm interested in resolving this - but my trouble is I don't know where to start. The error appears non-reproducible (I've tried returning to games and deploying the same combination of events in hopes of pinning it to a certain shader, etc.), and I don't know which part of 'the stack' it might be coming from. Variables that occur to me:

    - I custom-built this system; did I do something wrong - is my PSU not providing enough power to the graphics card?
    - I am running Steam and DOTA under Linux; could this new software have a bug?
    - Might it be something to do with my ATI Catalyst graphics drivers?
    - Is some other background process interfering?

    I'm usually mid-game when this occurs, so I quickly kill the power and reboot (when I'm lucky I can get back in with only 1-2 minutes lost!). So my question here relates to logs. Where should I start to look, or how might I set up logs to help me pin down a fatal crash of this kind by recording the moments leading up to the crash? Is this likely to be something I should push Steam to do, or is there something at the system level? Then perhaps I can return with a more specific question, and perhaps even a bug report :) Many thanks in advance.

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?

    EDIT: I've never used the question to thank people for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.

    Read the article

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work and wanting to train up as a Pen Tester. I installed the latest version on a logical partition (the main one contained Win7), and selected none of the extra modules (I think). I installed ngrep and fired it up (along with TCPdump) and immediately saw some strange traffic which I am unable to identify. My pc is sending out UDP packets every couple of seconds to a seemingly random series of IP addresses, all on the same port (47669 - though I did also see it use another port for a while). I watched it do this for about 20 mins, whilst trying to work out why it was doing it. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Anyone know what this is, or have any advice on how best to find out? Thanks. EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port, from a series of IP addresses, and returning 'port unreachable' messages.

    Read the article

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1
                ...
            }
            ...
            { //subpart N
                ...
            }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages:

    - The decomposed function becomes much smaller. This can help people read it without getting lost in the details.
    - Parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope. This can help readability and modularity in some situations.

    However, I also find that decomposition has some disadvantages:

    - There are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from a wrong place.
    - A module's complexity grows quadratically with the number of functions we add to it. (There are more possible ways for things to call each other.)

    Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day?

    EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting the subparts would return values that are combined in the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just some small simple types and a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }
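    One convention worth knowing: some languages let the subparts be lexically nested inside the parent, which removes the "called from the wrong place" problem without growing the module's surface. A sketch using C# local functions (names hypothetical):

        static class Example
        {
            // Local functions scope the subparts to the parent, so nothing else
            // in the module can call them, and the module's public surface
            // stays at one function.
            public static int DoLotsOfStuff(int input)
            {
                int p1 = SubpartOne();
                int p2 = SubpartTwo(p1);
                return Assemble(p1, p2);

                int SubpartOne() => input * 2;      // visible only inside DoLotsOfStuff
                int SubpartTwo(int x) => x + input; // may still read the parent's parameters
                int Assemble(int a, int b) => a + b;
            }
        }

    The module's complexity stops growing quadratically because the subparts are invisible outside DoLotsOfStuff; the cost is that they can no longer be unit-tested individually.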

    Read the article

  • Can't Log Into Ubuntu 12.04

    - by Razick
    Yesterday, after turning on Ubuntu, I logged into a Gnome session. A few minutes later, I tried switching to Unity for a change. Unfortunately, the background and my desktop icons loaded, but the system bar and launcher failed to load even after several minutes. Unity had always worked fine for me. I then tried the guest account, and it worked fine in both Unity and Gnome. However, the problem with my account got worse; I couldn't log into any desktop at all anymore. I would type in my password and press Enter, and it would just sit there doing nothing. The computer no longer responded in any way, so I had to hold the power button and reboot. The same problem happened repeatedly. Earlier today, I tried to get on again. I found that I had the same problem, except that when I tried to log in, the computer no longer locked up, but instead flashed a black screen with the console output and what seemed to be an error message before returning to the login screen. It was too quick for me to read - about 1/4 to 1/2 second. I can't transfer the files to a new account, or even make a new account, because I tried taking the password off my account, so now I can't authenticate from the guest account to perform root functions. I'd really appreciate some help, as I have some important files that are not backed up yet. Thanks.

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on developing a personal temperature-logging viewer based on my rasppi curl'ing data into my web server's API. Temperatures are taken every 2 seconds, and I can have several temperature sensors posting data. Needless to say, I will have a lot of data to handle, even within the scope of an hour. I have implemented a very simple paging API on the server so the server doesn't time out; it currently returns data in units of 1000 per call and pages through the data. I had the idea to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes from which to show data. The issue comes in when you want to view all sensors or an extended time period (say, 24 hours). Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent treatment of this in use yet, even though there are a lot of ways to attack the problem. I am just looking for some suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).

    Read the article
