Search Results

Search found 10695 results on 428 pages for 'some none'.


  • Problem with installing Flash

    - by Hannah
    This question has been posted loads of times, I know, but seriously I've spent hours and HOURS searching the web for answers. Anyway, I have just installed Ubuntu 13.04 for the first time. I tried to get on YouTube and found out I cannot play videos. When I went to install Flash, a window popped up offering to install either Adobe Flash or Gnash. I chose Flash from the Adobe website, which offers three different file types: YUM, .tar.gz or .rpm. From what I could gather, .tar.gz is the right one, so I tried it and it downloaded without an error message. It comes up with a box that says "open with Archive Manager". I extracted the files, but none of them were of any use; I could not work out how to install it, and there was no file obviously intended for installation. So what am I meant to do with these files? Would it be easier to just install Gnash?

    Read the article

  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class (a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. The bug happens when I'm looking through a hill on the map. Here is a picture of what happens: (screenshot omitted) I was wondering if this is a common mistake for starters, and if any of you have ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method:

    public override void Draw()
    {
        Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState);

        Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D));
        if (cam == null)
            throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera.");

        float triangleCount = indices.Length / 3f;

        basicEffect.Begin();
        basicEffect.World = worldMatrix;
        basicEffect.View = cam.ViewMatrix;
        basicEffect.Projection = cam.ProjectionMatrix;
        basicEffect.VertexColorEnabled = true;

        Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration(
            Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements);

        foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
        {
            pass.Begin();
            Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0, VertexPositionColor.SizeInBytes);
            Parent.Engine.GraphicsDevice.Indices = indexBuffer;
            Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, vertices.Length, 0, (int)triangleCount);
            pass.End();
        }

        basicEffect.End();
        Parent.Engine.SpriteBatch.End();
    }

    Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code (like the initialization code), just leave a comment and I will.
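    A note on the likely cause (an assumption based on the code above, which uses the XNA 3.1 SpriteBatch API): SpriteBatch.Begin disables the depth buffer, so terrain triangles are drawn in submission order and far geometry can show through near hills. A minimal sketch of re-enabling depth testing right after Begin:

    // Sketch: SpriteBatch.Begin turns depth testing off in XNA 3.1;
    // turn it back on before drawing the terrain.
    Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState);
    Parent.Engine.GraphicsDevice.RenderState.DepthBufferEnable = true;      // test against stored depths
    Parent.Engine.GraphicsDevice.RenderState.DepthBufferWriteEnable = true; // write new depths
    // ... set up basicEffect and draw as in the Draw method above ...

    Note that SaveStateMode.SaveState only restores states when End is called; it does not stop Begin from changing them for everything drawn in between.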

    Read the article

  • 10 Tech Products Ahead of Their Time [Video]

    - by Jason Fitzpatrick
    Sometimes a product just can't help but be too far ahead of its time to be adopted. Check out these 10 products that had their moment of glory a moment (or a decade) too soon. At Mashable they've gathered up 10 products that hit the market too soon for people to really appreciate them. Among them, as seen in the video above, is a super simple internet-focused computer. At the time it hit the market, people simply didn't get the value of having a cheap, easy-to-use internet terminal. It probably didn't help much that the 1990s internet didn't have the plethora of powerful and useful web-based applications we have now. Nonetheless, we now have tons of lightweight and "underpowered" devices focused on the internet experience (like netbooks, iPads, smartphones, Chromebooks, and more). Hit up the link below to see the 9 other gems from their collection of products ahead of their time. 10 Tech Products Ahead of Their Time [Mashable]

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro but an attribute, yet it serves exactly the same purpose. Now we get:

    CallerLineNumberAttribute == __LINE__
    CallerFilePathAttribute == __FILE__
    CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string by the C# compiler at compile time.

    public string UserName
    {
        get { return _userName; }
        set
        {
            _userName = value;
            RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
        }
    }

    protected void RaisePropertyChanged([CallerMemberName] string member = "")
    {
        var copy = PropertyChanged;
        if (copy != null)
        {
            copy(this, new PropertyChangedEventArgs(member));
        }
    }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose it for tracing to get your hands on the method name of your caller, along with other info, very fast now. All infos are added during compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:

    public static void TraceMessage(string message,
        [CallerMemberName] string memberName = "",
        [CallerFilePath] string sourceFilePath = "",
        [CallerLineNumber] int sourceLineNumber = 0)
    {
        Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
    }
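    To make the compile-time substitution concrete, here is a hedged sketch of what the compiler does with a call to the TraceMessage method above (the file path and line number are invented for illustration):

    static void Main(string[] args)
    {
        TraceMessage("Disk is low");
        // The C# 5 compiler rewrites the call site roughly as:
        // TraceMessage("Disk is low", "Main", @"c:\Demo\Program.cs", 3);
        // No stack walking happens at run time; the literals are baked in.
    }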
    When I think of tracing, I usually want an API which allows me to:

    - Trace method enter and leave
    - Trace messages with a severity like Info, Warning, Error

    When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance. A usable trace API might therefore look like this:

    enum TraceTypes
    {
        None = 0,
        EnterLeave = 1 << 0,
        Info = 1 << 1,
        Warn = 1 << 2,
        Error = 1 << 3
    }

    class Tracer : IDisposable
    {
        string Type;
        string Method;

        public Tracer(string type, string method)
        {
            Type = type;
            Method = method;
            if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
            {
            }
        }

        private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
        {
            // Do checking here if tracing is enabled
            return false;
        }

        public void Info(string fmt, params object[] args) { }
        public void Warn(string fmt, params object[] args) { }
        public void Error(string fmt, params object[] args) { }
        public static void Info(string type, string method, string fmt, params object[] args) { }
        public static void Warn(string type, string method, string fmt, params object[] args) { }
        public static void Error(string type, string method, string fmt, params object[] args) { }

        public void Dispose()
        {
            // trace method leave
        }
    }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are the trace message and which are its arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not about a core feature at all but about a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one.

    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I did add some filtering based on method and type, to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see whether tracing for this type is enabled at all. That data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics.

    class Tracer<T> : IDisposable
    {
        string Method;

        public Tracer(string method)
        {
            if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
            {
            }
        }

        public void Dispose()
        {
            if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
            {
            }
        }

        public static void Info(string fmt, params object[] args) { }

        /// <summary>
        /// Every type gets its own instance with a fresh set of variables to describe the
        /// current filter status.
        /// </summary>
        /// <typeparam name="UsingType"></typeparam>
        internal class TraceData<UsingType>
        {
            internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

            public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
            public TraceTypes Enabled = TraceTypes.None; // enabled trace levels for this type
        }
    }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you cannot configure tracing correctly anymore. I would therefore like to decorate my type with an attribute

    [CallerType]
    class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

    class Program
    {
        static void Main(string[] args)
        {
            using (var t = new Tracer()) // equivalent to new Tracer<Program>()
            {

    That would be really useful and super fast, since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since today you can already get conflicts if two generic types of the same name are defined in different namespaces; this would be only a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.:

    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which e.g. dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    - Write data to an output device.
    - ...
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4 not anymore, except if you enable a compatibility flag), where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for the method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle and not the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • Logitech USB headset not working on 12.04

    - by thepeoplescoder
    I've looked around for answers to this question, but none of them seem to work for my particular problem. Obviously my USB headset isn't working, but I might as well share the scenario. I am running Ubuntu 12.04 and I have a Logitech USB headset. The relevant output from dmesg is as follows:

    [160708.528047] usb 2-1.2: USB disconnect, device number 9
    [160768.890123] usb 2-1.2: new full-speed USB device number 10 using ehci_hcd
    [160768.997578] input: Logitech Logitech USB Headset as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.3/input/input15
    [160768.997795] generic-usb 0003:046D:0A0C.0005: input,hidraw0: USB HID v1.00 Device [Logitech Logitech USB Headset] on usb-0000:00:1d.0-1.2/input3

    When I go to System Settings > Sound, my headset shows up in both the Output and Input tabs, where it is called "Clear Chat Comfort USB Headset", but I doubt that's a serious issue. When I click my headset in either tab to use it, the change is not remembered when I reopen the sound settings; the built-in audio is still used. Also, when I select the headset in the Output tab and click the Test Sound button, the sound still comes from my laptop's onboard audio instead of from my USB headset. Does anyone know how to correct this? I am stumped.

    Read the article

  • How to better integrate a unix development environment into Windows

    - by SKenz
    I'm mostly a Windows user but I do most of my development (essentially web development) using unix tools and software. I've been going back and forth between using a dedicated Lubuntu virtual machine on VirtualBox and using some tools directly in Windows (msysgit, Python, Django), but none of these approaches is entirely satisfactory. I'd like to hear of ways other devs use to better integrate a unix workflow into Windows, for instance tighter integration between a Linux VM and Windows. The Vagrant demo showed how a VM could work off of a Windows project folder, and I found that nice. I'd like to hear of other tools and tips that would help mimic the workflow one can find on OS X (of course I understand that it cannot be as tightly integrated on Windows, since Windows doesn't have the same unix underpinnings). PS: I have tried Cygwin as well. EDIT, for clarification about what I find lacking (thanks to axblount for pointing that out): unix tools like MSYS et al. do not work as well as their native unix counterparts; many scripts and installers require further configuration or do not work at all. For instance, getting virtualenvwrapper to work is not very straightforward. As for VirtualBox: ideally I would like to use Windows software (Photoshop, Sublime Text 2) seamlessly with Linux. I mostly use an FTP client at the moment to move over files edited on the Windows side, which is a tedious process.

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers in my team are familiar with unit testing and integration testing. We have all worked with it. We have all written tests with it. Some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2), but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. Also, as soon as pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special function keys to enable/disable the external monitor connection, [Fn]-[F5] and [Fn]-[F6], has no effect. If none of the previous work, I need to put the computer into hibernation or full power off to restore screen function. If I watch closely when switching users, I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: No resume after Hibernate or Standby; When I resume from suspension - the screen is blank; Switch user fails to complete successfully. For what it's worth, blank-after-resume also used to happen occasionally when the laptop was running XP Home, but nowhere near as often, perhaps 6 or 8 times a year. UPDATE: I found System > Administration > System Testing and ran the Monitor test. It went very, very dark, but the window elements could be discerned, and the whole screen flashed (from very, very dark to black). On the third repeat of that same test the screen went to full black and stayed there. Moving the mouse (via touchpad) or touching keys did not wake it up again. I had to close the lid, put the computer into hibernation, and press the power button to restore it. UPDATE 2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r UPDATE 3: sometimes I can restore the screen by flipping to console 1 with Ctrl-Alt-F1 and then back to graphical with Ctrl-Alt-F7.

    Read the article

  • Clipping polygons in XNA with stencil (not using spritebatch)

    - by Blau
    The problem: I'm drawing polygons, in this case boxes, and I want to clip children polygons against their parent's client area.

    // Class Region
    public void Render(GraphicsDevice Device, Camera Camera)
    {
        int StencilLevel = 0;
        Device.Clear(ClearOptions.Stencil, Vector4.Zero, 0, StencilLevel);
        Render(Device, Camera, StencilLevel);
    }

    private void Render(GraphicsDevice Device, Camera Camera, int StencilLevel)
    {
        Device.SamplerStates[0] = this.SamplerState;
        Device.Textures[0] = this.Texture;
        Device.RasterizerState = RasterizerState.CullNone;
        Device.BlendState = BlendState.AlphaBlend;
        Device.DepthStencilState = DepthStencilState.Default;

        Effect.Prepare(this, Camera);

        Device.DepthStencilState = GlobalContext.GraphicsStates.IncMask;
        Device.ReferenceStencil = StencilLevel;

        foreach (EffectPass pass in Effect.Techniques[Technique].Passes)
        {
            pass.Apply();
            Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList,
                VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount);
        }

        foreach (Region child in ChildrenRegions)
        {
            child.Render(Device, Camera, StencilLevel + 1);
        }

        Effect.Prepare(this, Camera);

        // This does not work
        Device.BlendState = GlobalContext.GraphicsStates.NoWriteColor;
        Device.DepthStencilState = GlobalContext.GraphicsStates.DecMask;
        Device.ReferenceStencil = StencilLevel; // This should be +1, but in that case the last drawn box is blue and overlaps everything

        foreach (EffectPass pass in Effect.Techniques[Technique].Passes)
        {
            pass.Apply();
            Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList,
                VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount);
        }
    }

    public static class GraphicsStates
    {
        public static BlendState NoWriteColor = new BlendState()
        {
            ColorSourceBlend = Blend.One,
            AlphaSourceBlend = Blend.One,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha,
            ColorWriteChannels1 = ColorWriteChannels.None
        };

        public static DepthStencilState IncMask = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            StencilPass = StencilOperation.IncrementSaturation,
        };

        public static DepthStencilState DecMask = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            StencilPass = StencilOperation.DecrementSaturation,
        };
    }

    How can I achieve this? EDIT: I've just realized that NoWriteColor.ColorWriteChannels1 should be NoWriteColor.ColorWriteChannels. :) Now it's clipping right. Any other approach?
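    For reference, the corrected blend state implied by the EDIT would look like the sketch below; ColorWriteChannels applies to render target 0, whereas ColorWriteChannels1 only affects render target 1, which is why the original version still wrote colour.

    // Sketch of the fix from the EDIT: disable colour writes on render target 0.
    public static BlendState NoWriteColor = new BlendState()
    {
        ColorSourceBlend = Blend.One,
        AlphaSourceBlend = Blend.One,
        ColorDestinationBlend = Blend.InverseSourceAlpha,
        AlphaDestinationBlend = Blend.InverseSourceAlpha,
        ColorWriteChannels = ColorWriteChannels.None // was ColorWriteChannels1
    };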

    Read the article

  • Oracle Delivers Special Recognition for Specialized Partners

    - by michaela.seika(at)oracle.com
    Since announcing Oracle PartnerNetwork Specialized (OPN Specialized) in October 2009, Oracle has been focused on building a program that first enables solution providers to become highly skilled Oracle partners who deliver value to customers, and that then recognizes and rewards their achievements in a meaningful way. Today the company unveiled new benefits reserved for partners who have achieved one or more of the over 50 specializations currently available. The benefits demonstrate Oracle's commitment to showcase these valued partners to three key audiences: customers, other partners, and Oracle employees. With today's launch of www.oracle.com/specialized, Oracle has taken what IDC believes is a first-of-its-kind approach to putting top partners front and center with customers and prospects. While most vendors offer a business partner finder tool on their website, none has gone as far as Oracle with the creation of this new site dedicated to the promotion of Specialized Partners. The tag lines, "Recognized by Oracle, Preferred by Customers" and "Specialized. Recognized. Preferred.", get right to the point: these are the solution providers with which customers should choose to engage. The contents of the page offer multiple proof points to justify the marketing phrases. One of the benefits Oracle offers its Specialized Partners is video creation and placement. While Oracle works with partners to create informal or "guerilla" videos, which often are placed on YouTube to generate awareness and buzz, the company also produces professional videos for its partners. The greatest value the partner receives from this benefit isn't the non-trivial production costs that Oracle covers, but instead the prominent exposure Oracle gives the finished product. Partner videos are featured on www.oracle.com/specialized, used as part of the monthly OPN Specialized Partners webcasts, and placed on a customer-facing website, the Oracle Media Network, which includes several partner sites such as PartnerCast. A solution provider gains a great deal of credibility when they can send a prospect to an Oracle website where they are featured. Read the full article here.

    Read the article

  • Add Windows 7 to boot menu

    - by Cumatru
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 102400 7 HPFS/NTFS - system restore
    /dev/sda2 13 4674 37436416 7 HPFS/NTFS - Windows 7
    /dev/sda3 4674 58843 435116032 7 HPFS/NTFS - data storage
    /dev/sda4 58843 60802 15728640 83 Linux - Ubuntu 10.10

    Initially I installed StartUpManager. This (I think) added another 4 instances of Linux + memtest to my boot menu list, although I didn't see any boot menu; it boots into Ubuntu after a few seconds. I've tried to add Windows 7, but I did not succeed. This is a part of my menu.lst file:

    title Ubuntu 10.10, kernel 2.6.35-24-generic
    uuid 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/vmlinuz-2.6.35-24-generic root=UUID=1c9748e2-2f11-4a6c-91c0-7310d48c4a7a ro quiet splash
    initrd /boot/initrd.img-2.6.35-24-generic

    title Chainload into GRUB 2
    root 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/grub/core.img

    title Ubuntu 10.10, memtest86+
    uuid 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/memtest86+.bin

    menuentry "Windows 7" {
    set root=(hd0,2)
    chainloader +1
    }

    And this is after an update-grub:

    Searching for GRUB installation directory ... found: /boot/grub
    Searching for default file ... found: /boot/grub/default
    Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
    Searching for splash image ... none found, skipping ...
    Found kernel: /boot/vmlinuz-2.6.35-24-generic
    Found kernel: /boot/vmlinuz-2.6.35-22-generic
    Found GRUB 2: /boot/grub/core.img
    Found kernel: /boot/memtest86+.bin
    Updating /boot/grub/menu.lst ... done

    Later edit: I've added the following in 40_custom from /etc/grub.d/, and I've uncommented the hidden-menu line from menu.lst, but I still can't see any boot menu. I've also tried pressing ESC and SHIFT.

    menuentry "Windows 7 (loader) (on /dev/sda1)" {
    insmod part_msdos
    insmod ntfs
    set root='(hd0,msdos1)'
    chainloader +1
    }
    menuentry "Windows 7 (loader) (on /dev/sda1)" {
    insmod part_msdos
    insmod ntfs
    set root='(hd0,msdos0)'
    chainloader +1
    }
    menuentry "Windows 7 (loader) (on /dev/sda1)" {
    set root= hd(0,0)
    chainloader +1
    }
    menuentry "!Windows 7 (loader) (on /dev/sda1)" {
    set root= hd(0,1)
    chainloader +1
    }
    menuentry "!!Windows 7 (loader) (on /dev/sda1)" {
    set root= hd(0,2)
    chainloader +1
    }

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team members, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code completely in English, including comments, but of course excluding constants and resources. However, in a non-English domain I'm forced to make a decision, either to:

    1) Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2) Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3) Define a table that defines how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options:

    1) I have a strong aversion against mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.
    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.
    3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English, and most importantly, they write code using the English translation of the UL.

    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?
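    As an illustration of option 3, here is a hypothetical C# sketch (the Dutch domain terms are invented for the example): identifiers use the English translation of the UL, while the glossary mapping back to the domain experts' term is kept right next to the code.

    /// <summary>UL glossary: "taxatie" (domain experts' term).</summary>
    public class Appraisal
    {
    }

    /// <summary>UL glossary: "hypotheek" (domain experts' term).</summary>
    public class Mortgage
    {
        public Appraisal Appraisal { get; set; }
    }

    Developers keep talking to the experts in the domain language, while the code stays consistently English and the translation table lives where it is hardest to lose: in the source itself.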

    Read the article

  • Starcraft II on Wine - Ubuntu 12.04

    - by Robert Segerson
    I know this might seem like a repeat question, but I've literally looked over 100 threads on how to get SC2 to work on Ubuntu 12.04 through Wine, and none have worked. I installed Wine fresh today and inserted my purchased SC2 disk. When I try to open the installer (installer.exe) with Wine, an error appears saying: "No installer data could be found. If this problem persists, please contact Blizzard Technical Support." I searched for solutions to this issue and was directed to the following source: http://ubuntuforums.org/showthread.php?t=1435314. I followed the directions until I got to ls. I tried various combinations (ls installer.exe, ls '/home/rothic/Desktop/Installer.exe', etc.), and all come back with "no such file or directory". I'm not sure what to do; the next step would be to replace "starcraft_installer" with the StarCraft installer file, which I'm not sure how to do (very new to Linux). I tried WINEPREFIX=~/.wine_starcraft/ wine starcraft_installer and it says wine: cannot find L"C:\windows\system32\starcraft_installer.exe" despite the file being on the desktop as advised. Any suggestions on where to go from here?

    Read the article

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable. It's an old, organically grown program where the only rationale for creating a new DLL was that one didn't previously exist to fill a given need (and namespaces didn't exist in Delphi, so it never crossed our minds to make dll1.main.pas, dll2.main.pas or something even more unique). What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. Now, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory. But say I've got a project with about 4000 files and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about two-thirds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help with answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 MB, normal working conditions are usually less than 40 MB, and he thinks it could climb as high as 120 MB.

    Read the article

  • Screen Resolution stuck at 640x480 after installing Bumblebee

    - by Saurabh Agarwal
    I have a Dell XPS 15z laptop. As you can see here, there are some issues with NVIDIA drivers. The site recommends installation of Bumblebee (instructions given in the link). I am posting them again for ease:

    $ sudo add-apt-repository ppa:bumblebee/stable
    $ sudo apt-get update && sudo apt-get upgrade
    $ sudo apt-get install bumblebee bumblebee-nvidia
    $ sudo usermod -a -G bumblebee $USER

    After restarting the computer, however, the screen resolution was stuck at 640x480 and I got the following error message as soon as I logged in:

    **Could not apply the stored configuration for monitors**
    none of the selected modes were compatible with the possible modes:
    Trying modes for CRTC 63
    CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
    CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
    Trying modes for CRTC 64
    CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
    CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)

    Prior to the update the display was absolutely normal, so there is no doubt about the cause (albeit there was no support for the graphics drivers before). In case it helps: some features of the graphics drivers seem to be functional after Bumblebee, i.e. everything is in order except for the resolution. And if the resolution can't be fixed, please suggest a way to revert the changes so that at least the prior state can be restored. Any help in the matter would be highly appreciated.

    Read the article

  • .NET Demon support for VS 11 dark theme

    - by Alex.Davies
    I'm pleased to announce that .NET Demon will be shipping simultaneously with Visual Studio 11, whenever it ends up being released. That means we're going to make sure a version of .NET Demon is released very near to the Visual Studio 11 final release, one which fully supports the new version of VS. The interesting part of this support is going to be the new dark theme of VS, which I'm looking forward to using. I'm told dark colours reduce eye strain for developers. It's important that extensions like .NET Demon switch to a dark theme when the rest of the IDE changes, or the dark theme will look silly. Unfortunately, none of my favourite extensions look right in the dark theme yet, so even though I already use the Visual Studio 11 beta for my day-to-day development, I can't use the dark theme. Luckily .NET Demon uses WPF throughout, and the team at Microsoft is helping us use the WPF style system to make it easy to implement the support without having to add colour attributes to all the controls manually. We should have dark theme support in .NET Demon in the next month or so.
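    For a flavour of the approach (a hypothetical sketch, not .NET Demon's actual code; the brush key and theme dictionary names are invented), WPF lets you reference brushes as dynamic resources so that swapping the merged resource dictionary restyles live controls:

    using System;
    using System.Windows;
    using System.Windows.Controls;

    public static class ThemeDemo
    {
        public static TextBlock MakeThemedText()
        {
            var text = new TextBlock { Text = "Build status" };
            // Resolve the brush at runtime and re-resolve whenever the theme dictionary changes.
            text.SetResourceReference(TextBlock.ForegroundProperty, "EditorForegroundBrush");
            return text;
        }

        public static void ApplyDarkTheme()
        {
            // Swap in a dark dictionary; every dynamic resource reference updates automatically.
            var dark = new ResourceDictionary
            {
                Source = new Uri("pack://application:,,,/Themes/Dark.xaml")
            };
            Application.Current.Resources.MergedDictionaries.Clear();
            Application.Current.Resources.MergedDictionaries.Add(dark);
        }
    }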

    Read the article

  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that the web service is expected to give. Can we unit test these methods? As far as I know, unit testing is mocking the input values and seeing how the program responds. Is mocking both input and output values meaningful? Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample code (hypothetical code):

    public int getAge() {
        Service s = locate("ageservice"); // line 1
        int age = s.execute(empId);       // line 2
        return age;                       // line 3
    }

    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?
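    The usual advice is that a small, one-time refactor is unavoidable: invert the dependency so a test can substitute a stub for the unavailable service, instead of commenting out production lines. A hedged sketch of that shape, written here in C# to match the rest of this page's .NET examples (all names are invented):

    // Hypothetical sketch: the service dependency is injected, so tests can replace it.
    public interface IAgeService
    {
        int Execute(int empId);
    }

    public class EmployeeInfo
    {
        private readonly IAgeService _ageService;

        public EmployeeInfo(IAgeService ageService) { _ageService = ageService; }

        public int GetAge(int empId)
        {
            return _ageService.Execute(empId); // real or stubbed; the caller decides
        }
    }

    // In a unit test, a hand-rolled stub stands in for the web service:
    class StubAgeService : IAgeService
    {
        public int Execute(int empId) { return 50; }
    }

    With this shape the hard-coded 50 lives in the test, not in the production method.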

    Read the article

  • Inserting HTML code with jquery

    - by J. Robertson
    One of our web applications is a page that takes in a serial number, and various information is returned and displayed to the user. The serial is passed via AJAX, and based on the response, one of the following can happen:

    - An error message is shown
    - A new form replaces the previous form

    Now, the way I am handling this is to use jQuery to destroy (using $.remove()) the table that displayed the initial serial form, and then to append another HTML table that contains the second form. Right now I am including that additional form as part of the HTML source, setting it to display:none, and using jQuery to show it when appropriate. However, I don't like this approach, because if someone views the page source they can see the table markup that is not being displayed. My next thought was to use AJAX to read in another HTML file and append it that way. However, I am trying to keep down the number of files this project uses, and since most pages in our project will use AJAX, I can see a case where there are multiple files containing HTML snippets, and that feels sloppy to me. What is the best way to handle a case where multiple HTML elements are being shown and removed with jQuery?

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The usage of desktop computers and laptops has increased phenomenally all around the world. This link gives you the picture of how power consumption looks for the various devices we use daily. To reduce power consumption, Hibernate is one of the best ways, provided by default in Windows Vista and Windows 7. The Hibernate feature enables you to close the machine without closing your applications; the applications will be restored as they were once you restart the machine. The Hibernate feature is not enabled in Windows XP by default. I've seen many people run their machines for months and months without switching them off, because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:

    1. Right-click on the desktop.
    2. Click on Properties.
    3. Go to the Screen Saver tab.
    4. Click the Power button.
    5. Select the Hibernate tab.
    6. Check the checkbox "Enable Hibernate".
    7. Apply the settings.

    Now when you try to shut down, the "Shut down Windows" dialog shows the "Hibernate" option, and you can safely close the machine without closing your applications or windows; they will be restored once you turn the machine on. Sometimes you might want to provide this feature programmatically in the applications you develop for Windows, generally in windows applications where a process needs a long time. Download managers are one of the best examples. Below is the code to Hibernate from .NET code:

    using System.Windows.Forms;

    namespace CodeKicks.WinApp.Machine
    {
        public static class MyMachineHelper
        {
            public static void DoHibernate()
            {
                //Application.SetSuspendState(PowerState.Suspend, true, false);
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }
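    An aside that is not in the original post, but follows the documented WinForms signature SetSuspendState(PowerState state, bool force, bool disableWakeEvent): the method returns a bool, and passing force: false asks Windows to send a suspend request that running applications may cancel.

    // Sketch: request hibernation politely and observe whether it was honored.
    bool suspended = Application.SetSuspendState(PowerState.Hibernate, false, false);
    if (!suspended)
    {
        // An application vetoed the request, or hibernation is not enabled on this machine.
    }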

    Read the article

  • Cropping a line (laser beam) in XNA

    - by electroflame
    I have a laser sprite that I wish to crop. I want to crop it so that when it collides with an item, I can calculate the distance between the starting point and the ending point, and only draw that. This eliminates the "overdraw" of a laser drawing past an item. Essentially, I'm trying to crop a line, but also keep that line "attached" to the nose of my ship. The line should not be drawn past the nose of my ship; that should be the starting point. There is no rotation to worry about. Currently, I thought that doing this through SpriteBatch would be best. This is my current SpriteBatch code:

    spriteBatch.Draw(Laser.sprite,
        new Rectangle((int)Laser.position.X, (int)Laser.position.Y, Laser.sprite.Width, LaserHeight),
        new Rectangle(0, 0, Laser.sprite.Width, LaserHeight),
        new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
        Laser.rotation,
        new Vector2(Laser.sprite.Width / 2, LaserHeight / 2),
        SpriteEffects.None, 0);

    But this doesn't quite work. It does only draw part of the sprite, but when LaserHeight is incremented, it lengthens the line in both directions! I believe this is due to some stupid error on my part with the origin of the draw. Quick recap: I need to have my laser sprite drawn with the bottom of it at the nose of my ship, and then use LaserHeight to crop the image so only part of it is drawn. I have a feeling my explanation is a bit...lacking. So if you require more information, please say so and I will try to provide more information. Thanks in advance!
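    The both-ways growth is consistent with the origin being the centre of the cropped region: the destination rectangle is positioned so the origin lands on Laser.position, so increasing LaserHeight extends the sprite above and below that point. A hedged sketch of one fix, assuming the beam points up in the texture: anchor the origin at the bottom-centre so the base stays pinned to the ship's nose.

    // Sketch: a bottom-centre origin keeps Laser.position at the beam's base,
    // so incrementing LaserHeight only moves the top edge upward.
    spriteBatch.Draw(Laser.sprite,
        new Rectangle((int)Laser.position.X, (int)Laser.position.Y, Laser.sprite.Width, LaserHeight),
        new Rectangle(0, 0, Laser.sprite.Width, LaserHeight),
        new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
        Laser.rotation,
        new Vector2(Laser.sprite.Width / 2f, LaserHeight), // was (Width / 2, LaserHeight / 2)
        SpriteEffects.None, 0);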

    Read the article

  • How to set up a VirtualHost on Amazon EC2 for phpmyadmin

    - by Oudin
    Hi, I'm currently working on setting up a VirtualHost on Amazon EC2 for accessing phpMyAdmin, so I can access it at test.example.com as opposed to it being widely available at the default example.com/phpmyadmin. So far I've created a file "testfile" in /etc/apache2/sites-available/ with the code below and enabled it ("a2ensite testfile"). However, I'm not getting the vhost to work.

    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName test.example.com
        ServerAlias test.example.com
        #DocumentRoot /usr/share/phpmyadmin
        DocumentRoot /home/user/public_html/folder
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !test.example.com
        RewriteRule (.*) [L]
        <Directory /home/user/public_html/folder>
            Options FollowSymLinks
            DirectoryIndex index.php
            AllowOverride None
            <IfModule mod_php5.c>
                AddType application/x-httpd-php .php
                php_flag magic_quotes_gpc Off
                php_flag track_vars On
                php_flag register_globals Off
                php_admin_flag allow_url_fopen Off
                php_value include_path .
                php_admin_value upload_tmp_dir /var/lib/phpmyadmin/tmp
                php_admin_value open_basedir /usr/share/phpmyadmin/:/etc/phpmyadmin/:/var/lib/phpmyadmin/
            </IfModule>
        </Directory>
        # Authorize for setup
        <Directory /usr/share/phpmyadmin/setup>
            <IfModule mod_authn_file.c>
                AuthType Basic
                AuthName "phpMyAdmin Setup"
                AuthUserFile /etc/phpmyadmin/htpasswd.setup
            </IfModule>
            Require valid-user
        </Directory>
        # Disallow web access to directories that don't need it
        <Directory /usr/share/phpmyadmin/libraries>
            Order Deny,Allow
            Deny from All
        </Directory>
        <Directory /usr/share/phpmyadmin/setup/lib>
            Order Deny,Allow
            Deny from All
        </Directory>
        ErrorLog /home/user/public_html/folder/logs/error.log
        LogLevel warn
        CustomLog /home/user/public_html/folder/logs/access.log combined
    </VirtualHost>

    sudo ln -s /usr/share/phpmyadmin /home/user/public_html/folder

    The above line creates a link to phpMyAdmin in the public folder. Any help on this would be greatly appreciated. Note: example.com will be replaced with my official domain.

    Read the article

  • USB Device first mounted as root, then by user

    - by Petr Marek
    When I connect my Kindle, it shows up as a usb0 medium which I can read but not write (owner = root). However, if I do sudo umount /media/usb0, usb0 gets unmounted and a Kindle medium gets mounted properly (it is writable etc.). What can cause such strange behavior? It's not only with the Kindle, but with flash drives etc. as well. My /etc/fstab:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda2 during installation
    UUID=595815c2-d882-4ec8-a2cd-cce70471167c / ext4 errors=remount-ro 0 1
    # /boot was on /dev/sda6 during installation
    #UUID=1340a336-66ca-4743-a6e4-41a307af2dda /boot ext4 defaults 0 3
    # swap was on /dev/sda5 during installation
    UUID=afa49f1d-d505-4166-82a2-2f44548a48c6 none swap sw 0 0
    UUID=deb86039-528a-45f3-b5f9-ce528740c94e /data_hdd ext4 defaults 0 2

    My groups:

    petr@sova:~$ groups petr
    petr : petr adm cdrom sudo dip plugdev fuse lpadmin sambashare bumblebee

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command or a way that I could, from a program (C++, Python, or other), enter a series of inputs that are interpreted as though they were keyboard presses? I have a bad case of Repetitive Stress Injury (RSI) from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim". Basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into some limitations, as I can only execute a command-line command for a given voice command. Rather than having a "voice command" - "command line command" mapping, I would like to have a "voice command" - "keyboard input" mapping, so that when my active window is a browser and I trigger the mapped keystroke, a new tab opens, and if I'm in vim, a new vim tab opens. Any suggestions, ideas, tools or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.

    Read the article

  • How to change system time or force a time sync

    - by cpury
    My laptop's CMOS battery is probably running out. I know I have to fix it soon, but until then this very annoying issue keeps me from using the machine. Scenario: my system clock is reset to 15/12/08 11:00 AM every time I turn on the computer. This has all sorts of side effects, one of the more annoying being that I can't log into my Gmail. At first I just waited for a time sync to happen, as I have that activated and all. It never happened. I googled and didn't find any way of enforcing a time sync, which I found very strange. Is there really none? Setting the time and date by hand is also a problem. In my 12.10 installation the time & date settings are bugged; I remember them being bugged in my previous, older installation as well. Of course, the easiest way should be to just manually edit the date and time fields by entering a new date. This is possible in theory, but the changes are reverted as soon as the text boxes lose focus. The other way to do it is to click the +/- buttons for a long, long time. The first time I did that, the changes weren't stored either. I found out that afterwards I have to switch from manual to internet-sync mode and wait ~5 seconds until the new time is shown in the top left corner of my screen, or otherwise it won't take effect. So a nice solution would be one of the following: setting the time/date by hand, maybe via the terminal, so I can just enter the right values; or a command that would enforce an immediate time sync, which I can run after booting. I know I have to change the battery soon, but this is seriously keeping me from working...

    Read the article

  • GLSL, is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, white vertices being higher than black ones.

    //Vertex Shader Code
    #version 330

    uniform mat4 modelMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 projectionMatrix;
    uniform sampler2D heightmap;

    layout (location=0) in vec4 vertexPos;
    layout (location=1) in vec4 vertexColour;
    layout (location=3) in vec2 vertexTextureCoord;
    layout (location=4) in float offset;

    out vec4 fragCol;
    out vec4 fragPos;
    out vec2 fragTex;

    void main()
    {
        // Retrieve the current texel's colour.
        // (One hedged observation: if the sampler's min filter expects mipmaps that were
        // never generated, texture() in a vertex shader returns black, i.e. a height of 0;
        // textureLod(heightmap, vertexTextureCoord, 0.0) is a common alternative.)
        vec4 hmColour = texture(heightmap, vertexTextureCoord);

        // Offset the y position by the current texel's red channel.
        // Note: this local 'offset' shadows the 'in float offset' attribute above.
        vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

        // Final position
        gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

        // Data sent to the fragment shader
        fragCol = vertexColour;
        fragPos = vertexPos;
        fragTex = vertexTextureCoord;
    }

    However, the code I have produced only creates a grid with none of the Y vertices higher than any others.

    Read the article
