Search Results

Search found 1605 results on 65 pages for 'monitors'.

Page 59/65 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • A story of Murphy – my technical issues at TechDays Switzerland #chtd

    - by Laurent Bugnion
    I had two sessions at the recent Swiss TechDays. While the first one (Advanced Development for Windows Phone 8) went extremely well (I think), I had a very annoying technical issue at the beginning of my second session. First let me add that I talked to Microsoft about it, and I hope they will change a few things in the room assignment for next year. My two sessions were one right after the other, with only a 15 minute break to change rooms. I don't mind having two sessions so close to each other, but I would really like them to be in the same room, in order to avoid having to move my laptops (plural, that will become important later) and redoing the tech check. That being said, I am guilty of not checking where my talks would be until the day before the conference, and when I did notice, it was too late to change it.

    After my first session, I quickly moved to the other room and set up my main laptop, a Dell Precision. We tested the video output (VGA) and didn't notice anything special. The projectors are using a fairly high resolution (kudos to the Basel conference center for not having old school 1024x768 projectors anymore, which make Blend really hard to demo ;) but since everything went great during the first talk, I was not worried. In fact I even had some time to chat with some early attendees about my Microsoft Surface and the Samsung Slate 7, which I had carried with me in addition to the Precision. I just thought it would be nice to show the hardware that Windows 8 can run on, without thinking any further.

    When the session started, I immediately noticed that the main screen was not showing anything. I thought I had just forgotten to switch to "duplicate" for the video output, and did that with a quick Win-P. However it didn't "hold". After 2 seconds, it reverted back to a black display for my attendees. Then I started to really worry. We tried everything: switching from VGA to HDMI, changing the resolution, setting the projector as the primary display, but nothing did the trick. The projector was just refusing to show my screen. Now, to show you how desperate I was getting, I even considered using the "extend" setting (which worked just fine) and presenting on one of the feedback monitors on the floor, but really it was super cumbersome. Eventually, my last resort arrived: I started my Samsung Slate 7, which by chance has Visual Studio 12 and Blend 5 installed, plugged the HDMI projector into the dock (yes, I had the dock with me, which I usually don't!), connected it to the internet (had to enter a long password for that), loaded the source code from my main machine using a USB stick and… finally started to give my presentation. All in all I think we lost about 10 minutes. Amongst the most horrible minutes of my whole life, truly (yes I am blessed, I didn't have that many horrible minutes in my life ;)

    I really want to apologize to my attendees. We joked a bit during the attempts to resolve the issue, and the reactions I had after the session were all very nice and sympathetic. Only a handful of people left my session while I was having the issues, and I really don't blame them (who knew how long the problem would last!). But still, I have probably talked at more than 60 sessions over the years, and this was by far my most painful moment.

    What did I learn?
    So what did I learn from this? Well, from now on I will always have my slate ready with the latest source code, an internet connection and every tool I might need during the presentation. This way, if I detect even a hint that the Precision might not work, I will just switch to the Slate. The experience of presenting on the slate is actually not bad at all; it is just a bit slow for my taste, but it does work. By the way, I will be posting the code and slides for my sessions very soon, I just need to "clean it and zip it". Stay tuned, and thanks again for your patience in that horrible circumstance. Cheers, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Simple OpenGL program major slow down at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is that I see a big drop in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage when in full-screen and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. At 600x600, these numbers are at about 45%, and my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I can play many games at 1080p (I have a GeForce GT 120M btw). Below are my shaders.

    First Pass

    #VS

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second Pass

    #VS

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);
            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));
            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);
                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);
                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);
            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

    Aspect ratio is very important to home video. What is aspect ratio? It is the ratio of the image's width to its height.
    - 2.35:1 means the image is 2.35 times as wide as it is high. Pixar uses this for half of our movies. This is called a widescreen image.
    - When a widescreen image is modified to fit your television screen, they cut it to fit the box of your screen. When a comparison is made, huge chunks of the picture are missing, and it is harder to follow what is going on when those pieces are missing. The whole is greater than the pieces themselves: if you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie.

    Why different aspect ratios?
    - Film before the 1950s: 1.33:1, the Academy Standard. There were images of all aspects though; there was no standard. Thomas Edison developed projecting images onto a wall/screen but didn't patent it, as he saw no value in it.
    - Then 1.37:1 came about to add a strip of sound. This is the same size as a 35mm film frame.
    - Around 1952, TV comes along. NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (Cinema-Scope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4x1, really wide.
    - 1.85:1 Academy Flat and 2.35:1 Anamorphic Scope (aka Panavision/Cinemascope): almost all movies are made in these two aspect ratios. Pixar has done half in one and half in the other.
    - Why choose one over the other? Artist choice. It is part of the story the director wants to tell.

    Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options:
    - Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio.
    - Pan and scan is another option: cut, then scan left or right depending on where the action is.
    - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended.
    - Re-staging: for animated movies, you can move characters around and restage the scene. It is a completely different version of the film. This is the best possible option that Pixar came up with, but they have mostly stopped doing it as the demand has pretty much dropped off.

    Why not 1.33 today? There has been an evolution of taste and demands.
    - VHS is a linear medium. The focus was portability, not quality. Most releases were pan and scan and the quality was so bad that people didn't notice.
    - DVD was introduced in 1996. You could have more content, two versions of the film: the widescreen version and the 1.33 version. People realized that they were seeing more of the movie with the widescreen.
    - High definition televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc in 2006. This is all widescreen. You cannot find a square TV anymore; TVs are roughly 1.85:1 aspect ratio. There is a change in demand: users are used to black bars, are used to widescreen, and are educated now.

    What's next for in-flight entertainment? High def IFE, personal electronic devices, 3D in flight.

    Read the article

  • Snap App Windows to Pre-Defined Screen Sections with Acer GridVista

    - by Asian Angel
    The window snapping feature in Windows 7 and the ability to organize monitor(s) into specific gridded sections have both become popular lately. If you love the idea of having both combined in a single piece of software, then join us as we look at Acer GridVista. Note: Acer GridVista works with Windows XP, Vista, & 7. It will also work with dual monitors.

    Setup
    Acer GridVista comes in a zip file format, and at first you might assume that it is portable in nature, but it is not. Once you unzip the enclosed folder you will need to double click on "Setup.exe" to install the program.

    Acer GridVista in Action
    Once you have installed the program and started it up, all that you will notice at first is the new "System Tray Icon". Here you can see the "Context Menu"… The only menu command that you will likely use most of the time is the "Grid Configuration Command". Notice that for our single monitor setup it lists "Display 1". The "Single Setting" is enabled by default, and you can easily choose the layout that best suits your needs. The enabled layout style will always be highlighted in yellow for easy reference. For our example we chose the "Triple (primary at right)" layout style. Each section will be specifically numbered as shown here. Do not worry… the grid and numbers only appear for a moment and then become invisible again until you move an app window into that section/area of your screen. On every regular app window that you open you will notice three new buttons in the upper right corner. Here is what each of these new buttons does:
    - Acer GridVista Extensions (Transparent, Send To Window Grid, About Acer GridVista): Viewable in a drop-down menu
    - Lock To Grid (Enable/Disable): Enabled by default. Note: Set to disable on a particular window to keep it free of the "grid locking function"
    - Always On Top (Enable/Disable): Disabled by default
    A good look at the "Extensions Drop-Down Menu" where you can set an app window to be transparent or send it to a specific screen section on your monitor(s). If you open an app it will not automatically lock into a specific section. To lock the window into a specific section, drag-and-drop the app window into the desired section. Notice the red outline and highlighted number on "Section 2" below. The red outline and highlighted number serve as an indicator that if you release the app window at that moment it will lock into the outlined/highlighted section. Now that Notepad is locked into "Section 2" you can see that it is maximized within that section. Continue to drag-and-drop your app windows into the appropriate sections as desired… apps can still be minimized to the "Taskbar" the same as before.

    Options
    These are the options available for Acer GridVista…

    Conclusion
    If you have been wanting the ability to "snap" windows and organize them into specific screen areas, then Acer GridVista is definitely a program that you should try out.
    Links: Download Acer GridVista at Softpedia | View detailed information at the Acer GridVista Homepage

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:
    - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

    B. Bugs Fixed:
    - When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)
    - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)
    - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
    - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang, and it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in the forums. Thanks, MEB Team
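    As an illustration of how the new option might be used, here is a hedged sketch of a backup invocation; --on-disk-full is the option described above, but the value shown ("warn") and the paths/credentials are assumptions to be checked against the MySQL Enterprise Backup manual:

        mysqlbackup --user=backupadmin --password \
                    --backup-dir=/var/backups/mysql \
                    --on-disk-full=warn \
                    backup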

    Read the article

  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PC computers. I'd like it to be 64-bit but possibly run as 32-bit as well. I have these kinds of questions:

    - What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input. Generally, how do you go about making your own 32-bit/64-bit operating system, and what are good resources with useful information about writing your own operating system for x86 computers? I don't care how old sources are as long as they are still relevant and useful to what I am doing.
    - I know that I will want it to have kernel drivers that access peripheral hardware directly. Where should I look for advice and documentation for programming and understanding the interface to the peripheral hardware the operating system will communicate with? I will need to understand how the operating system will receive input from and interact with keyboards, mice, computer monitors, hard drives, USB, etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about.
    - At this point I'm thinking that I'd like to implement the Linux system calls within my operating system, so that programs that run on Linux can run on my operating system. I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides and using the ELF format? What else?
    - I am also interested in people's thoughts about why it might not be a good idea to make your own operating system, and why it is a good idea to make your own operating system.

    Thank you for any input.

    Read the article

  • Is background tasks the solution for this problem?

    - by Trinca
    Hi. I need to develop an enterprise app that monitors network traffic. Basically it detects whether the user is on wi-fi or cellular data and saves the amount of bytes sent and received in a period of time. I saw an app on the App Store that does exactly this job. Detecting wi-fi or cellular data is quite simple using the Reachability sample provided by Apple. My problem is to keep monitoring the bytes sent and received while the app is in the background. As it is an enterprise app, I used UIBackgroundModes "voip" to avoid the app being terminated. I also set up the setKeepAliveTimeout handler, and I'm able to see the logs every 10 minutes, BUT only for 10 seconds after the handler runs. I mean, setKeepAliveTimeout only lets my app run a timer for 10 seconds every 10 minutes. I'm wondering whether or not a background task is the best solution for my problem. I'll appreciate any comments.

    EDIT: Ok guys, that's the perfect way to do it. First of all you should read this: http://www.christian-fries.de/blog/files/tag-ios.html I tried this and it works really well. All we need to do is create a second thread detached from the main one; this way we have a continuous thread running forever. You should also see the GCD docs on Apple's website. The second thing you should consider for an enterprise app is to set it up as a voip app; this way iOS will bring your app back up even after a reboot. It's a special behavior iOS has to keep voip apps running. That's it guys, I hope it can help you.

    Read the article

  • BindAttribute, Exclude nested properties for complex types

    - by David Board
    I have a 'Stream' model:

        public class Stream
        {
            public int ID { get; set; }

            [Required]
            [StringLength(50, ErrorMessage = "Stream name cannot be longer than 50 characters.")]
            public string Name { get; set; }

            [Required]
            [DataType(DataType.Url)]
            public string URL { get; set; }

            [Required]
            [Display(Name="Service")]
            public int ServiceID { get; set; }

            public virtual Service Service { get; set; }
            public virtual ICollection<Event> Events { get; set; }
            public virtual ICollection<Monitor> Monitors { get; set; }
            public virtual ICollection<AlertRule> AlertRules { get; set; }
        }

    For the 'create' view for this model, I have made a view model to pass some additional information to the view:

        public class StreamCreateVM
        {
            public Stream Stream { get; set; }
            public SelectList ServicesList { get; set; }
            public int SelectedService { get; set; }
        }

    Here is my create post action:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] StreamCreateVM viewModel)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(viewModel.Stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = viewModel.Stream.ServiceID });
            }
            return View(viewModel);
        }

    Now, this all works apart from the [Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] bit. I can't seem to Include or Exclude properties within a nested object.
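    A hedged sketch of one way around this, assuming ASP.NET MVC's default model binder: Include/Exclude only match top-level property names of the object being bound, so the nested Stream can be bound as its own parameter using the BindAttribute's Prefix property (this is an illustration of the API, not necessarily the only fix):

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create(
            [Bind(Prefix = "Stream", Include = "Name,ServiceID")] Stream stream,
            int selectedService)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = stream.ServiceID });
            }

            // Rebuild the view model (including ServicesList) before redisplaying the form.
            var viewModel = new StreamCreateVM { Stream = stream, SelectedService = selectedService };
            return View(viewModel);
        }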

    Read the article

  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia? For those of you that don't know what web-safe colors are: Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]

    Read the article

  • In C#, what thread will Events be handled in?

    - by Ben
    Hi, I have attempted to implement a producer/consumer pattern in C#. I have a consumer thread that monitors a shared queue, and a producer thread that places items onto the shared queue. The producer thread is subscribed to receive data... that is, it has an event handler, and just sits around and waits for an OnData event to fire (the data is being sent from a 3rd party API). When it gets the data, it sticks it on the queue so the consumer can handle it. When the OnData event does fire in the producer, I had expected it to be handled by my producer thread. But that doesn't seem to be what is happening. The OnData event seems as if it's being handled on a new thread instead! Is this how .NET always works... are events handled on their own thread? Can I control what thread will handle events when they're raised? What if hundreds of events are raised near-simultaneously... would each have its own thread? Thanks in advance! Ben
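    To illustrate the usual pattern, here is a minimal sketch of handing the event data back to a dedicated consumer thread; it assumes .NET 4's BlockingCollection is available, and DataEventArgs is a hypothetical stand-in for the third-party API's event args. The key point is that an event handler runs synchronously on whichever thread raises the event, so the handler should only enqueue and return:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Hypothetical stand-in for the 3rd-party API's event args.
        class DataEventArgs : EventArgs
        {
            public string Payload { get; set; }
        }

        class DataPipeline
        {
            // Thread-safe queue shared by the producer (event handler) and the consumer thread.
            private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();

            // Subscribed to the API's OnData event; runs on whatever thread the API raises it from.
            public void OnData(object sender, DataEventArgs e)
            {
                _queue.Add(e.Payload);
            }

            // Dedicated consumer thread: blocks until items arrive, then processes them in order.
            public void StartConsumer()
            {
                var consumer = new Thread(() =>
                {
                    foreach (var item in _queue.GetConsumingEnumerable())
                    {
                        Console.WriteLine("Handled on the consumer thread: " + item);
                    }
                });
                consumer.IsBackground = true;
                consumer.Start();
            }
        }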

    Read the article

  • ScriptingBridge causes iTunes to relaunch after quit

    - by Justin Voss
    I'm working on a Cocoa app that monitors what you're listening to in iTunes, and since I'm targeting Mac OS 10.5 and higher, I've decided to use Scripting Bridge. If I try to close iTunes too close to the time that my app polls it for the current track, iTunes will immediately relaunch! The only way to reliably prevent this behavior is to quit my app first, then quit iTunes. Switching to EyeTunes solves the problem, but it's a fairly old codebase and I was hoping that I could accomplish this without an external library. Surely I'm doing something wrong that's causing the relaunch? Here's some sample code; this snippet is run every few seconds, triggered by an NSTimer.

        #import "iTunesBridge.h" // auto-generated according to Apple's docs

        - (void)updateTrackInfo
        {
            iTunesApplication *iTunes = [[SBApplication alloc] initWithBundleIdentifier:@"com.apple.iTunes"];
            iTunesTrack *currentTrack = [iTunes currentTrack];
            // inspect currentTrack to determine what's being played...
            [iTunes release];
        }

    Is this a known issue with Scripting Bridge, or am I using it incorrectly?

    Read the article

  • Windows Screensaver Multi Monitor Problem

    - by Bryan
    I have a simple screensaver that I've written, which has been deployed to all of our company's client PCs. As most of our PCs have dual monitors, I took care to ensure that the screensaver ran on both displays. This works fine; however, on some systems where the primary screen has been swapped (to the left monitor), the screensaver only works on the left (primary) screen. The offending code is below. Can anyone see anything I've done wrong, or a better way of handling this? For info, the context of "this" is the screensaver form itself.

        // Make form full screen and on top of all other forms
        int minY = 0;
        int maxY = 0;
        int maxX = 0;
        int minX = 0;

        foreach (Screen screen in Screen.AllScreens)
        {
            // Find the bounds of all screens attached to the system
            if (screen.Bounds.Left < minX)
                minX = screen.Bounds.Left;
            if (screen.Bounds.Width > maxX)
                maxX = screen.Bounds.Width;
            if (screen.Bounds.Bottom < minY)
                minY = screen.Bounds.Bottom;
            if (screen.Bounds.Height > maxY)
                maxY = screen.Bounds.Height;
        }

        // Set the location and size of the form, so that it
        // fills the bounds of all screens attached to the system
        Location = new Point(minX, minY);
        Height = maxY - minY;
        Width = maxX - minX;
        Cursor.Hide();
        TopMost = true;
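    For comparison, a hedged sketch of computing the combined bounds with the framework's own helpers instead of tracking the extremes by hand (note that the loop above grows maxX/maxY from Bounds.Width/Bounds.Height rather than Bounds.Right/Bounds.Bottom, which only spans both monitors when the primary screen happens to be the left-most one). This is an illustration of the approach, not a verified drop-in fix:

        // Assumes the usual System.Drawing / System.Windows.Forms namespaces are imported,
        // and that this runs inside the screensaver form, like the snippet above.
        Rectangle virtualBounds = SystemInformation.VirtualScreen;

        // Equivalent manual computation: union of every screen's bounds.
        // Rectangle virtualBounds = Rectangle.Empty;
        // foreach (Screen screen in Screen.AllScreens)
        //     virtualBounds = Rectangle.Union(virtualBounds, screen.Bounds);

        Location = new Point(virtualBounds.Left, virtualBounds.Top);
        Width = virtualBounds.Width;
        Height = virtualBounds.Height;
        Cursor.Hide();
        TopMost = true;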

    Read the article

  • For the professional programmers - do you still write code for fun at home ? [closed]

    - by Led
    Possible Duplicate: Do you ever code just for fun?

    I've been working as a 'professional' coder for about 11 years. (I've just turned 33.) When I talk to my colleagues, I find that most of them don't actually program any more in their spare time: 8 (or 10 :)) hours a day at their job is enough for them. A difference between me and them might be that I was always programming for fun (demoscene stuff etc.), which is why I got into the field, while most of them picked up programming later on (at university or whatever). When I get home my head is always full of ideas, so usually I have a hobby project going on. Is it weird to spend 8 hours a day programming, and then get home, have dinner, and do some more? For me the reasons are simply: ideas and trying stuff, and wanting to develop something all by myself, so when it's finished I can claim it as my own victory. How about you? And if you do, do you have other reasons to do so?

    Edit: And if you've got spare-time projects, it might be fun to tell us a bit about them :) Spamming a link to your site/hobby project won't be frowned upon here!

    Edit2: Vote for this if you want to encourage companies to make monitors that'll give you a nice tan! ;-)

    Read the article

  • Our GUI Situation

    - by shawn-harrison
    These days, any decent Windows desktop application must perform well and look good under the following conditions:
    1) XP and Vista and Windows 7.
    2) 32 bit and 64 bit.
    3) With and without Themes.
    4) With and without Aero.
    5) At 96 and 120 and perhaps custom DPIs.
    6) One or more monitors (screens).
    7) Each OS has its own preferred font.
    Oh My! What is a lowly little Windows desktop application developer to do :(. I'm hoping to get a thread started with suggestions on how to deal with this GUI dilemma. First off, I'm on Delphi 7.
    a) Does Delphi 2010 bring anything new to the table to help with this situation?
    b) Should we pick an aftermarket component suite and rely on them to solve all these problems?
    c) Should we go with an aftermarket skinning engine?
    d) Perhaps a more html type gui is the way to go. Can we make a relatively complex gui app with html that doesn't require using a browser? (prefer to keep it form based)
    e) Should we just knuckle down and code through each one of these scenarios and quit bitching about it?
    f) And finally, how in the world are we supposed to test all these conditions?
    thanks, shawnH

    Read the article

  • LIBPCAP and WIRESHARK Capture on PPP

    - by user655629
    Hi, I have written a small bridge program using the LIBPCAP API. I have installed WinPcap 3.1 Beta in order to support capturing from a PPP interface. What I do is capture from the PPP interface through my LIBPCAP program and send the traffic to another Ethernet interface in my computer. Then I connect this Ethernet interface to another Ethernet interface on another computer, where I monitor it through Wireshark. So, in short, my PPP-Ethernet bridge is on computer 1, and another computer (computer 2), directly connected to computer 1 over Ethernet, monitors the incoming traffic from the bridge through Wireshark. The problem I face is that when I capture PPP traffic through Wireshark on computer 1, I see a reasonable delay between the packets. But when I use my LIBPCAP program to capture and relay traffic and check the traffic on computer 2 using Wireshark, it shows jumps of 0.5 seconds delay after some packets. This is quite unexplainable to me. I don't understand why Wireshark's direct PPP capture on computer 1 shows no such delay while my LIBPCAP program does. I have checked my bridge for Ethernet-to-Ethernet relaying and there is no delay like the one I am experiencing in the PPP-Ethernet case. A slightly higher delay between packets is acceptable, but such a BIG delay after a couple of packets is unacceptable. Please help if you can. Best Regards, FIKA

    Read the article

  • Uninterruptible Windows Process

    - by Nullstr1ng
    Hi Guys, We're starting a new custom project right now for a client, and one of the requirements is that the process cannot be terminated unless the system is shutting down, restarting, or logging off. This application monitors the USB interface; we will be using WMI to query the device periodically. The client wants to run the application on the Windows XP operating system and doesn't like installing .NET, so we targeted Visual Basic 6 as our language. My main concern is that this application cannot be terminated. Our project adviser talks about antivirus software, and yes, some antivirus processes cannot be terminated. I was thinking about how to do the same in Visual Basic 6. I know there will be Windows API calls involved in the project, and that is fine with me, but where should I start? I saw some articles that convert the EXE to a service, create a Windows service in Visual Basic 6, etc. So please .. share your thoughts.

    Read the article

  • Use of Syntactic Sugar / Built in Functionality

    - by Kyle Rozendo
    I was busy looking deeper into things like multi-threading and deadlocking etc. The book I am reading covers both pseudo-code and C code, and I was busy looking at implementations of things such as mutex locks and monitors. This brought to mind the following: in C#, and in fact .NET as a whole, we have a lot of syntactic sugar for doing things. For instance (.NET 3.5):

        lock(obj)
        {
            body
        }

    is identical to:

        var temp = obj;
        Monitor.Enter(temp);
        try
        {
            body
        }
        finally
        {
            Monitor.Exit(temp);
        }

    There are other examples of course, such as the using() {} construct, etc. My question is: when is it more applicable to "go it alone" and literally code things oneself than to use the "syntactic sugar" in the language? Should one ever use their own ways rather than those of people who are more experienced in the language you're coding in? I recall having to not use a Process object in a using block to help with some multi-threaded issues and infinite looping before. I still feel dirty for not having the using construct in there. Thanks, Kyle
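    As a side note, a sketch of how the same lock statement is documented to expand under C# 4.0 and later, so that the lock is only released if it was actually acquired:

        bool lockTaken = false;
        var temp = obj;
        try
        {
            Monitor.Enter(temp, ref lockTaken);
            // body
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(temp);
        }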

    Read the article

  • npgsql Leaking Postgres DB Connections: Way to monitor connections?

    - by Alan
    Background: I'm moving my application from npgsql v1 to npgsql v2.0.9. After a few minutes of running my application, I get a System.Exception: Timeout while getting a connection from the pool. The web claims that this is due to leaking connections (opening a db connection but not properly closing it). So I'm trying to diagnose leaking Postgres connections in npgsql. From the various web literature around, one way to diagnose leaking connections is to set up logging on npgsql and look for the leaking-connection warning message in the log. Problem is, I'm not seeing this message in the logs anywhere. I also found a utility that monitors npgsql connections, but it's unstable and crashes. So I'm left manually inspecting code. For every place that creates an npgsql connection, there is a finally block disposing of it. For every place that opens a datareader, CommandBehavior.CloseConnection is used (and the datareader is disposed). Any other places to check, or can someone recommend a way to look for leaking pool connections?
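    For reference, a minimal sketch of the disposal pattern that keeps pooled Npgsql connections from leaking (the connection string and query are placeholders); as a server-side cross-check, counting rows in PostgreSQL's pg_stat_activity view while the application runs will show whether backend connections keep accumulating:

        // Assumes "using Npgsql;" and "using System.Data;" at the top of the file.
        // The connection string below is a placeholder for illustration only.
        var connectionString = "Server=localhost;Database=mydb;User Id=me;Password=secret;";

        using (var conn = new NpgsqlConnection(connectionString))
        using (var cmd = new NpgsqlCommand("SELECT 1", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                while (reader.Read())
                {
                    // consume the results
                }
            } // disposing the reader closes the connection (CommandBehavior.CloseConnection)
        } // Dispose() returns the underlying connection to the pool even if an exception is thrown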

    Read the article

  • Can Bonjour browse a service with a particular name?

    - by Roman
    Bonjour provides the "DNSSD.browse(serviceType, callBackObject)" method, which browses for services of a particular type. If a service of the given type is found, Bonjour calls "callBackObject.serviceFound". If the service is lost, Bonjour calls "callBackObject.serviceLost". I always considered "DNSSD.browse" to be a method for monitoring a particular service: Bonjour monitors a particular service and calls the appropriate method if the service is found (available) or lost (not available). But then I realized that "DNSSD.browse" receives (as an argument) a type of service (for example "http.tcp"), and there can be several services of this type. So it probably calls "serviceFound" and "serviceLost" if any service of the specified type is found or lost, respectively. But in my application I would like to browse for just one particular service. What is the best way to do it? I have two potential solutions:
    1. When I register a service, I give it a unique type. For example: "server1.http.tcp".
    2. I register services with unique names (not types) and ask Bonjour to browse for services with particular names. But I am not sure that Bonjour provides such a possibility. Can it browse for services with specific names?

    Read the article

  • Achieving Thread-Safety

    - by Smasher
    Question: How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, things to look for?

    Background: I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (using another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing and I want to make sure that the application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize that it is some kind of synchronization bug, where I forgot to synchronize my objects properly. Hence my question concerning best practices, testing of thread-safety and things like that.

    mghie: Thanks for the answer! I should perhaps be a little bit more precise. Just to be clear, I know about the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to differentiate threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code. It looked something like FKeyList.Sort(CompareKeysFunc); it turns out that I had to synchronize FKeyList while sorting. It just didn't come to mind when I initially wrote that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you added sync code in all the important places?

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth usage standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header. Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out-of-the-box IMHO.

    OK - at first I marked the solution using System.IO.Compression as the answer, as I could never seem to get IIS7 dynamic compression to work. Well, as it turns out: dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic outside the browser application. In Fiddler, the response was, in fact, gzip compressed!! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.
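    For anyone landing here later, a hedged sketch of what that IIS7 setting looks like in web.config, assuming the Dynamic Content Compression module is installed (attribute names per the IIS system.webServer schema; verify against your IIS version):

        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>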

    Read the article

  • How should developers cope with so many GUI configuration combinations?

    - by shawn-harrison
    These days, any decent Windows desktop application must perform well and look good under the following conditions:
    - XP and Vista and Windows 7.
    - 32 bit and 64 bit.
    - With and without Themes.
    - With and without Aero.
    - At 96 and 120 and perhaps custom DPIs.
    - One or more monitors (screens).
    - Each OS has its own preferred font.
    Oh my! What is a lowly little Windows desktop application developer to do? :( I'm hoping to get a thread started with suggestions on how to deal with this GUI dilemma. First off, I'm on Delphi 7.
    a) Does Delphi 2010 bring anything new to the table to help with this situation?
    b) Should we pick an aftermarket component suite and rely on them to solve all these problems?
    c) Should we go with an aftermarket skinning engine?
    d) Perhaps a more HTML-type GUI is the way to go. Can we make a relatively complex GUI app with HTML that doesn't require using a browser? (prefer to keep it form based)
    e) Should we just knuckle down and code through each one of these scenarios and quit bitching about it?
    f) And finally, how in the world are we supposed to test all these conditions?

    Read the article

  • Vim: yank and replace -- the same yanked input -- multiple times, and two other questions

    - by Hassan Syed
    Now that I am using vim for everything I type, rather than just for configuring servers, I want to sort out the following trivialities. I tried to formulate Google search queries but the results didn't address my questions :D.

    Question one: How do I yank and replace the same yanked text multiple times? Once I have something in the yank history (if that is what it's called) and then highlight and use the 'p' key in command mode, the replaced text is put at the front of the yank history; therefore subsequent replace operations do not use the text I intended. I imagine this to be a useful feature under certain circumstances, but I do not have a need for it in my workflow.

    Question two: How do I type text without causing the line to ripple forward? I use hard tab stops to align my code in a certain way -- e.g.,

        FunctionNameX ( lala * land );
        FunctionNameProto ( );

    When I figure out what needs to go into the second function, how do I insert it without pushing the text along?

    Question three: Is there a way of having a uniform yank history across gvim instances on the same machine? I have 1 monitor. Just wondering; at the moment I am using highlight + mouse middle click.
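    A hedged note on question one, since it trips up a lot of people: the most recent yank is kept in register 0, while text overwritten by a visual-mode paste only replaces the unnamed register, so pasting from register 0 repeats the original yank (worth double-checking with :help registers in your Vim version):

        " yank the source text once (it goes to the unnamed register and to register 0)
        yiw
        " first replacement: visually select the target and paste over it
        viwp
        " later replacements: paste from register 0, which still holds the original yank
        viw"0p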

    Read the article

  • Can you open a form or window in an Outlook Addin (VSTO)

    - by dontpanic
    Hi, I am new to VSTO programming. I have created a basic add-in for Outlook 2007 that monitors a folder containing XML text files, which it opens and then sends as emails before deleting them. This all works fine. I want the user to be able to configure certain settings for the way the add-in/program will operate, such as the folder that it will monitor, and other things. The logical way to do this is to create a menu item in the add-in (which I have also done) that opens a Windows Form (or XAML window) that allows them to enter the parameters. In my add-in I added a new Windows Form item, which worked, and the designer opened. However, in my add-in code I cannot open the form. The Show() method normally associated with form objects is not available. Is this simply something you cannot do, or am I just doing it the wrong way? I have read about Outlook form regions, but these seem to be attached to Outlook items such as a new email, task, appointment etc... there doesn't seem to be a way to create a form region that can be opened in the main window of Outlook. Ideally, I would like to go with my original method of opening a new window from a menu item, but if this isn't possible I would like to hear other solutions. Thanks, Will.
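    For what it's worth, a hedged sketch of the pattern that normally works from a VSTO add-in: Show() and ShowDialog() are instance methods, so the form has to be instantiated first. The handler signature below assumes the menu item is a CommandBarButton, and SettingsForm is a hypothetical name for the Windows Form added to the project:

        // Inside the ThisAddIn class of the Outlook add-in.
        // SettingsForm is a hypothetical Windows Form in the same project.
        private void settingsButton_Click(Microsoft.Office.Core.CommandBarButton ctrl, ref bool cancelDefault)
        {
            using (var settings = new SettingsForm())
            {
                // ShowDialog() blocks until the user closes the form;
                // Show() would display it modelessly instead.
                settings.ShowDialog();
            }
        }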

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ which can be run either from a command line or as an Eclipse plug-in, on Windows and Linux? I would like it to be easy to use. Preferably one that doesn't overwrite new(), delete(), malloc() or free(). Something like GDB if it's going to be command line, but I don't remember GDB being used for detecting memory leaks. If there is a unit testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks.

    EDIT: Since this has come up: by "overwrite" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code. If it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use a Linux tool on there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop.

    NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory.

    EDIT2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the C++ libraries.
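    As one concrete data point for the Linux side, a hedged example of the kind of invocation that fits these constraints (Valgrind's memcheck instruments the unmodified binary at run time, so nothing has to be #included or relinked); the flags shown are the commonly documented ones, so check valgrind --help on your version:

        # build with symbols so the leak report shows file/line information
        g++ -g -O0 -o myapp main.cpp

        # memcheck is the default tool; --leak-check=full prints an allocation stack for each leak
        valgrind --leak-check=full ./myapp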

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65  | Next Page >