Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.

Page 140/572

  • How To Make Moving News Bar in C# Desktop Application without Timer

    - by Ehab Sutan
    Hello, I'm making a desktop application in C# which contains a moving news bar of labels. I'm using a timer to move these labels, but the problem is that when I make the timer interval low (1-10 ms, for example) the application consumes a very high percentage of CPU, and when I make it higher (200-500 ms) the movement of the labels becomes intermittent and jerky, to the point that the user may not be able to read the news comfortably. The question is: is there a better way to make the news bar move smoothly without consuming so many resources? Thanks.
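
    One common approach (a sketch, not from the post, and it does keep a timer despite the title: the usual observation is that the 1 ms interval, not the timer itself, is what burns CPU) is to run the timer at a moderate interval of around 30 ms and move the label several pixels per tick. The form and label names below are hypothetical.

        using System;
        using System.Windows.Forms;

        public class NewsBarForm : Form
        {
            private readonly Label newsLabel = new Label { AutoSize = true, Text = "Breaking news ..." };
            private readonly Timer scrollTimer = new Timer { Interval = 30 }; // ~33 updates/s: smooth, negligible CPU

            public NewsBarForm()
            {
                DoubleBuffered = true;          // reduce flicker while the label moves
                Controls.Add(newsLabel);
                newsLabel.Top = 10;
                newsLabel.Left = Width;

                scrollTimer.Tick += (s, e) =>
                {
                    newsLabel.Left -= 3;        // a few pixels per tick instead of 1 px every 1-10 ms
                    if (newsLabel.Right < 0)
                        newsLabel.Left = Width; // wrap around once the text has scrolled off
                };
                scrollTimer.Start();
            }

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.Run(new NewsBarForm());
            }
        }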

    Read the article

  • LuaInterface and 64Bit

    - by Skintkingle
    OK, I'm currently using LuaScript v5.1 in a game engine, along with the handy LuaInterface that comes with it. I've tested it on a range of systems running a range of OSs, and LuaInterface seems to fail on 64-bit operating systems. Could anyone point me to a 64-bit compiled LuaInterface.dll, or is there an alternative to LuaInterface that can be used? LuaInterface is extremely useful and I don't think I would be able to write a more extensive interface myself using lua51 (I'm not that good, sadly). Any help or links would be greatly appreciated. Thanks a lot guys!
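
    One workaround that is often suggested for this situation (an assumption here, not something the poster has confirmed works) is to force the .NET application to run as a 32-bit process, so the existing 32-bit LuaInterface.dll and native lua51.dll can load on a 64-bit OS. A minimal sketch of the relevant project setting in a VS2008-era .csproj:

        <!-- Hypothetical excerpt from the game project's .csproj -->
        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <PlatformTarget>x86</PlatformTarget>
        </PropertyGroup>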

    Read the article

  • Memory mapped files causes low physical memory

    - by harik
    I have 2GB of RAM and am running a memory-intensive application; the machine goes into a low-available-physical-memory state and the system stops responding to user actions, such as opening an application or invoking a menu. How do I trigger or tell the system to swap memory to the pagefile and free physical memory? I'm using Windows XP. If I run the same application on a 4GB RAM machine this is not the case and system response is good: once available physical memory is exhausted there, the system automatically swaps to the pagefile and frees physical memory, so it is not as bad as on the 2GB system. To overcome this problem (on the 2GB machine) I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high and the same problem as above occurs: physical memory is low. Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why? Any help is appreciated. Thanks.
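
    For reference, a sketch of the memory-mapped-file approach described above, using .NET 4's System.IO.MemoryMappedFiles and reading the dataset through a modest sliding view rather than one huge mapping (the file path, the 64 MB window, and the language itself are assumptions, since the post does not say what the application is written in):

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class MappedDatasetSketch
        {
            const long Window = 64L * 1024 * 1024;   // map 64 MB at a time

            static void Main()
            {
                const string path = @"C:\data\dataset.bin";   // hypothetical dataset file
                long length = new FileInfo(path).Length;
                long total = 0;
                using (var mmf = MemoryMappedFile.CreateFromFile(path))
                {
                    for (long offset = 0; offset < length; offset += Window)
                    {
                        long size = Math.Min(Window, length - offset);
                        using (var view = mmf.CreateViewAccessor(offset, size))
                        {
                            for (long i = 0; i < size; i++)
                                total += view.ReadByte(i);    // stand-in for the real processing
                        }
                    }
                }
                Console.WriteLine(total);
            }
        }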

    Read the article

  • please explain NHibernate HiLo

    - by Ben
    I'm struggling to get my head round how the HiLo generator works in NHibernate. I've read the explanation here, which made things a little clearer.

    My understanding is that each SessionFactory retrieves the high value from the database. This improves performance because we have access to IDs without hitting the database. The explanation from the above link also states:

    For instance, supposing you have a "high" sequence with a current value of 35, and the "low" number is in the range 0-1023. Then the client can increment the sequence to 36 (for other clients to be able to generate keys while it's using 35) and know that keys 35/0, 35/1, 35/2, 35/3... 35/1023 are all available.

    How does this work in a web application, where I only have one SessionFactory and therefore one hi value? Does this mean that in a disconnected application you can end up with duplicate (low) ids in your entity table?

    In my tests I used these settings:

        <id name="Id" unsaved-value="0">
          <generator class="hilo"/>
        </id>

    I ran a test to save 100 objects. The IDs in my table went from 32768 to 32868, and the next hi value was incremented to 2. Then I ran my test again and the IDs were in the range 65536 to 65636. First off, why start at 32768 and not 1, and secondly, why the jump from 32868 to 65536?

    Now I know that my surrogate keys shouldn't have any meaning, but we do use them in our application. Why can't I just have them increment nicely, like a SQL Server identity field would?

    Finally, can someone give me an explanation of how the max_lo parameter works? Is this the maximum number of low values (entity ids, in my head) that can be created against the high value?

    This is one topic in NHibernate that I have struggled to find documentation for. I read the entire NHibernate in Action book and it still doesn't go into how this works in any detail.

    Thanks, Ben
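
    As a worked illustration of the arithmetic being observed above (a sketch of NHibernate's documented hilo scheme, assuming the default max_lo of 32767 when the mapping does not set one): each generated id is hi * (max_lo + 1) + lo, so the first hi value of 1 produces ids starting at 32768, and hi = 2 starts at 65536, which matches the jump in the question. Setting a small max_lo in the mapping makes the generated ids smaller and closer to an identity-style sequence, at the cost of hitting the hi table more often.

        using System;
        using System.Collections.Generic;

        // Sketch of the hilo id calculation only; this is not the NHibernate source.
        public static class HiLoSketch
        {
            const long MaxLo = 32767;   // assumed default when max_lo is not specified

            public static IEnumerable<long> IdsForHi(long hi)
            {
                for (long lo = 0; lo <= MaxLo; lo++)
                    yield return hi * (MaxLo + 1) + lo;   // hi=1 -> 32768..65535, hi=2 -> 65536..98303
            }

            static void Main()
            {
                foreach (var id in IdsForHi(1))   // prints 32768, 32769, ... like the first test run
                    Console.WriteLine(id);
            }
        }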

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by Emtucifor
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database needs to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running as doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of:

    1. Run IIS as a custom-created domain user who is granted high permissions
    If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem.

    2. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user)
    This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin.

    3. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one
    This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place.

    4. Connect using Kerberos, then delegate
    This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the web server using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for the web server so it is "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus cannot reach out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have.

    5. Create a service or COM+ application that fetches the resource for the web site
    Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: this is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. (A sketch of this approach follows this list.)

    6. Map drive letters to shares
    This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by someone other than a network admin, it is also going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is done for the current user, and I don't know the IUSR account password, so I can't log in as it and create the mappings.

    7. Move the resources local to the web server/database
    There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file.

    8. Move the service to the final web server/database
    I suppose I could run a web server on my SQL Server database machine, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No.

    9. Virtual directories in IIS
    I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't yet been able to work out how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL Server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them.

    I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server.

    Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...
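
    A minimal sketch of the COM+ approach described in item 5 above (the component, class, and share names are hypothetical, and it assumes a .NET ServicedComponent even though the original component could equally be written in VB6 or C++):

        using System.EnterpriseServices;
        using System.IO;

        // Register with regsvcs after strong-naming the assembly; the COM+ application
        // runs under its own configured identity, not the web site's IUSR account,
        // so it can reach the secured file share on the web site's behalf.
        [assembly: ApplicationName("SecureImageFetcher")]
        [assembly: ApplicationActivation(ActivationOption.Server)]

        namespace SecureImageFetcher
        {
            public class ImageFetcher : ServicedComponent
            {
                // Hypothetical share; the COM+ identity has rights to it, the web user does not.
                private const string ImageRoot = @"\\fileserver\secured-images";

                public byte[] GetImage(string fileName)
                {
                    // The component does its own validation so its high-privilege identity
                    // cannot be abused to read arbitrary files.
                    if (fileName.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
                        throw new IOException("Invalid file name.");
                    return File.ReadAllBytes(Path.Combine(ImageRoot, fileName));
                }
            }
        }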

    Read the article

  • shell script output in html + email that html

    - by Kimi
    Using Solaris, I have a monitoring script that uses other scripts as plugins. These plugins are also scripts which work in different ways, like:
    1. sending an alert on high memory utilization
    2. high CPU usage
    3. full disk space
    4. checking the core file dump
    Now all of this is displayed on my terminal, and I want to put it in an HTML file/format and send it as the body of the mail, not as an attachment. Thanks.
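
    A minimal sketch of one common way to do this with sendmail on Solaris (the recipient address, the report path, and the sendmail location are assumptions; mailx does not easily let you set a Content-Type header, which is what makes the HTML render inline):

        #!/bin/sh
        # Collect the plugin output into an HTML body, then mail it inline.
        REPORT=/tmp/report.html
        TO=ops@example.com

        {
          echo "To: $TO"
          echo "Subject: Monitoring report for `hostname`"
          echo "MIME-Version: 1.0"
          echo "Content-Type: text/html"
          echo ""
          cat "$REPORT"
        } | /usr/lib/sendmail -t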

    Read the article

  • Soon to be PhD in Computer Science - Which Path to Follow?

    - by mttr
    I am going to submit my PhD thesis within the next six months. My PhD is on managing the availability of large-scale distributed systems, so I have some experience actually building non-trivial systems (plus I have four years' experience working as a programmer). I am now trying to figure out what I should do following the PhD. I enjoy research (a quick definition: identify a problem, come up with a solution, ask interesting questions, find ways to answer them, build a system, experiment, contribute some new knowledge and publish). I also like teaching and supervising students. It would seem that a career in academia is the ideal thing to do (I can work on non-trivial problems and contribute something of use to at least some people). However, a career in academia has two significant drawbacks. First, it can be difficult to gain access to real systems with real users which then display real problems. This creates the danger that you do work that seems important (to you and maybe to some of your colleagues), but is not really relevant to anything or anyone. Second, the pay is pretty sad. Apparently, you have to sacrifice this for the privilege of doing research. I enjoy programming, but I don't just want to hack on some web-based system for the rest of my life. That is, working in IT for a bank is not a future I see myself enjoying. I want to work on interesting problems (that's difficult to define clearly): things where you don't know how to start, that take some time to figure out and attack, that require a rigorous approach to demonstrate that the problem has been solved, and problems that need a solution in the real world. Given the experience of people on Stack Overflow, what do you think suitable options are, and why (or alternatively, what gaps in my thinking does the above reveal)? Is industrial research (e.g. IBM Research, Microsoft Research) the only alternative avenue to a career in academia? What other areas, companies, occupations, etc. could provide me with stimulating, inspiring work? In which regions and countries am I most likely to find such work? Please share your experience.

    Read the article

  • What happens after a packet is captured?

    - by Rayne
    Hi all, I've been reading about what happens after packets are captured by NICs, and the more I read, the more I'm confused. Firstly, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space, then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC copies the packet directly into memory, bypassing the CPU. So is the NIC -> kernel memory -> user-space memory flow still valid? Also, do most NICs (e.g. Myricom) use DMA to improve packet capture rates? Secondly, does RSS (Receive Side Scaling) work similarly on both Windows and Linux systems? I can only find detailed explanations of how RSS works in MSDN articles, where they talk about how RSS (and MSI-X) works on Windows Server 2008. But the same concepts of RSS and MSI-X should still apply to Linux systems, right? Thank you. Regards, Rayne

    Read the article

  • Remote C++ Development using SSH only inside Eclipse Environment

    - by EFreak
    How do you integrate the Remote Systems Explorer and CDT plugins inside Eclipse? What I mean is that you can use the Remote Systems Explorer (RSE) plugin to work on C++ code on a remote Linux box inside Eclipse, but when you try to compile, you basically run a shell command through SSH. The CDT plugin is unable to locate the remote system and, of course, the remote compiler. Is there a way to integrate both plugins so that we can use the parsing/suggestion features of CDT for the remote system as well, and also get features like remote compilation and remote debugging, using SSH only? If this is not possible, what is the closest open source alternative for this problem?

    Read the article

  • Uses of Erlang in Telecom

    - by user94154
    I'm a web developer and a college student majoring in telecommunications. This means I'm decent at programming and I know a little about telecom networks (at a high, non-technical level). I keep reading that Erlang is used all over the telecom industry (supposedly for its performance). I'm wondering whether there's any way I can combine my programming skills and my telecommunications major using Erlang. Is most of the Erlang/telecom stuff closed source? Are there any open source telecom projects written in Erlang? UPDATE: sipwiz's comment makes me think in terms of a question larger than "uses of Erlang": how can I leverage a high-level understanding of telecom networks and the telecom regulatory environment with programming? I hope this hasn't veered too off-topic for SO.

    Read the article

  • Python: win32console import problem

    - by David
    I want to run wexpect (the Windows port of pexpect) on my Windows 7 64-bit machine. I am getting the following error:

        C:\Program Files (x86)\wexpect\build\libwexpect.py
        Traceback (most recent call last):
          File "C:\Program Files (x86)\wexpect\build\lib\wexpect.py", line 97, in <module>
            raise ImportError(str(e) + "This package was intended for Windows like operating systems.")
        ImportError: No module named win32console
        This package requires the win32 python packages. This package was intended for Windows like operating systems.

    In the code it is failing on the following line:

        from win32console import *

    I am using Python 2.6.4. I cannot figure out how to install win32console.
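
    win32console is provided by the pywin32 package ("Python for Windows extensions"), so one hedged fix, assuming the package is simply missing rather than installed for the wrong interpreter, is to install a pywin32 build matching the Python version and bitness (here, Python 2.6, 64-bit) and guard the import:

        # Sketch: verify that the pywin32-provided module loads under this interpreter.
        try:
            import win32console
        except ImportError:
            raise SystemExit(
                "win32console is missing: install the pywin32 package "
                "(use the build matching Python 2.6 64-bit)."
            )

        print("win32console imported OK")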

    Read the article

  • JW Player - How can I add an event listener for fullscreen toggling?

    - by Charles
    I'm using JW Player 4.5 on my site and I need to add an event listener for when fullscreen is toggled. The reason for this is to switch between a low-def version and a high-def version. The default video will be the low-def version, and when the user switches to a fullscreen display, it will change to the high-def version. According to http://developer.longtailvideo.com/trac/wiki/Player5Events, the ViewEvent.JWPLAYER_VIEW_FULLSCREEN event can only be accessed from ActionScript, but I need it from JavaScript... Is there any way to achieve this? Can you recommend a better solution?

    Read the article

  • How to ignore/prevent javadoc folder from validation during Eclipse Build?

    - by h2g2java
    In my WAR there is a huge javadoc folder. There is no point in validating it, since javadocs are produced by Sun's (Oracle's) javadoc utility. I have forgotten how I did it the last time: I need to tell the Eclipse build not to validate that particular folder. Reasons why I need it:
    1. The HTML produced by the Sun javadoc generation utility does not meet the requirements that Eclipse enforces. There is a bug report in Eclipse, but Eclipse responds that the Sun javadoc generator's non-compliance is not their fault and that Eclipse intends to stick to strict compliance, which results in lots of HTML errors listed in the Problems tab.
    2. The javadoc folder is a remote link, and high activity on that link is using up my CPU resources; because it is a link to a remote location, that high CPU activity is sustained for a long time until it finishes scanning the whole 35MB of javadocs.
    Thanks - need help.

    Read the article

  • OpenType programming

    - by Sorush Rabiee
    Hi all. Recently I asked two questions (1 and 2) about using OpenType features in programs written in Python and .NET languages, but didn't get an answer. I realized there is no way to change the text rendering engines of operating systems, or to force them to use OpenType, so now I want to implement my own: a program that
    - provides a text engine that receives glyph shapes from OTF and TTF files and renders them in the sequence of glyphs in the text;
    - generates all of the OTL features;
    - can be used in other parts of applications, like controls and components of .NET or Python GUI libraries.
    If Python and .NET languages are not suitable in this situation, please tell me about other programming languages or tools. Comments and answers about the text rendering systems of common operating systems, or about designing text engines compatible with the Unicode 5.02 protocol, are welcome.

    Read the article

  • C Differences on windows and Unix OS

    - by zapping
    Is there any difference between C written on Windows and C written on Unix? I teach C as well as C++, but some of my students have come back saying some of the sample programs do not run for them on Unix. Unix is alien to me; unfortunately I have no experience with it whatsoever. All I know is how to spell it. If there are any differences then I should be advising our department to invest in Unix systems, as currently there are none in our lab. I do not want my students to feel that they have been denied or kept aloof from something.
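
    Standard C is the same on both platforms; what usually breaks is sample code that leans on non-standard, Windows-only headers. A sketch of the kind of difference that typically trips students up (the use of <conio.h> here is an illustrative assumption about what those samples might contain, not something stated in the question):

        /* Windows-only (Turbo C / MSVC style): <conio.h>, getch() and clrscr()
         * do not exist on Unix, so code like
         *
         *     #include <conio.h>
         *     clrscr();
         *     getch();
         *
         * fails to compile there. A portable version sticks to standard C: */
        #include <stdio.h>

        int main(void)
        {
            printf("Hello from standard C.\n");
            printf("Press Enter to exit...");
            getchar();   /* standard replacement for the non-standard getch() */
            return 0;
        }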

    Read the article

  • Interface Builder error: IBXMLDecoder: The value for key is too large to fit into a 32 bit integer

    - by stdout
    I'm working with Robert Payne's fork of PSMTabBarControl that works with IB 3.2 (thanks BTW Robert!): http://codaset.com/robertjpayne/psmtabbarcontrol/. The demo application works fine on 64-bit systems, but when I try to open the XIB file in Interface Builder on a 32-bit system I get:

        IBXMLDecoder: The value (4654500848) for key (myTrackingRectTag) is too large to fit into a 32 bit integer

    Building the app as 32 bit works, but then running it gives:

        PSMTabBarControlDemo[9073:80f] * -[NSKeyedUnarchiver decodeInt32ForKey:]: value (4654500848) for key (myTrackingRectTag) too large to fit in 32-bit integer

    Not sure if this is a generic IB issue that can occur when moving between 64 and 32 bit systems, or if this is a more specific issue with this code. Has anyone else run into this?

    Read the article

  • Bitbanging a PIO on Coldfire/ucLinux

    - by G Forty
    Here's the problem: I need to program some hardware via 2 pins of the PIO (1 clock, 1 data). Timing constraints are tight - a 10ms clock cycle time. All this, of course, whilst I maintain very high level services (CAN bus, TCP/IP). The downstream unit also ACKs by asserting a PIO pin, configured as an input, high, so this loop has to both read and write. I need to send 16 bits in the serial stream. Is there an established way to do this sort of thing, or should I simply get the hardware guys to add a PIC or somesuch? I'd much prefer to avoid exotics like RTAI extensions at this stage. I did once see a reference to user-mode IO which implied a possible interrupt-driven driver, but I lost track of it. Any pointers welcomed.
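
    For illustration, a user-space sketch of the bit-bang loop itself. The pio_set()/pio_get() helpers are hypothetical wrappers around whatever the Coldfire PIO driver actually exposes (they are not a real API), MSB-first order and clock polarity are assumptions, and nanosleep on a non-realtime uClinux kernel only guarantees a minimum delay, so the 10ms cycle is best-effort from user space:

        #include <stdint.h>
        #include <time.h>

        extern void pio_set(int pin, int level);   /* hypothetical: drive an output pin */
        extern int  pio_get(int pin);              /* hypothetical: read the ACK input  */

        #define PIN_CLOCK 0
        #define PIN_DATA  1
        #define PIN_ACK   2

        static void delay_ms(long ms)
        {
            struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
            nanosleep(&ts, NULL);
        }

        /* Clock out 16 bits, MSB first, with a 10 ms clock cycle (5 ms high, 5 ms low). */
        int send_word(uint16_t word)
        {
            int bit;
            for (bit = 15; bit >= 0; bit--) {
                pio_set(PIN_DATA, (word >> bit) & 1);
                pio_set(PIN_CLOCK, 1);
                delay_ms(5);
                pio_set(PIN_CLOCK, 0);
                delay_ms(5);
            }
            return pio_get(PIN_ACK);   /* downstream unit pulls ACK high once it has accepted the word */
        }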

    Read the article

  • Revision histories and documenting changes

    - by jasonline
    I work on legacy systems, and I used to see the revision history of files or functions being modified every release in the source code, for example:

        //
        // Rev. No   Date         Author   Description
        // -------------------------------------------------------
        // 1.0       2009/12/01   johnc    <Some description>
        // 1.1       2009/12/24   daveb    <Some description>
        // -------------------------------------------------------
        void Logger::initialize()
        {
            // a = b;     // Old code, just commented and not deleted
            a = b + c;    // New code
        }

    I'm just wondering if this way of documenting history is still practiced by many today. If yes, how do you apply modifications to the source code - do you comment the old code out or delete it completely? If not, what's the best way to document these revisions? If you use version control systems, does it follow that your source files contain pure source code, except for comments when necessary (no revision history for each function, etc.)?

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />  <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />     <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />        <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files, but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then, adding Folders 2 and 3, it starts removing all the files as I want. Then, at seemingly random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be), but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
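
    One possible explanation (an assumption, not something confirmed in the post): items declared in a top-level ItemGroup are evaluated when the project file is parsed, before the build produces the deployment output, so the wildcards may only match whatever files happen to be left over from a previous build - which would look exactly like random behaviour. MSBuild 3.5 (the VS2008 toolset) allows the item group to be declared inside the target, so the wildcards are evaluated when AfterBuild actually runs; a sketch:

        <Target Name="AfterBuild">
          <!-- Evaluate the wildcards at AfterBuild time (MSBuild 3.5+), not at parse time. -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" ItemName="ActuallyDeleted" />
          </Delete>
          <Message Text="DELETED FILES: @(ActuallyDeleted)" Importance="high" />
        </Target>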

    Read the article
