Search Results

Search found 815 results on 33 pages for 'hell awaits'.

Page 23/33

  • Switch to IPv6 and get rid of NAT? Are you kidding?

    - by Ernie
    So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues: Our office NAT router (an old Linksys BEFSR41) does not support IPv6. Nor does any newer router, AFAICT. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway. If we're supposed to just get rid of this router and plug everything directly into the Internet, I start to panic. There's no way in hell I'll put our billing database (with lots of credit card information!) on the Internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only six addresses any access to it at all, I'd still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to be even remotely comfortable with that. There are a few old hardware devices (i.e., printers) that have absolutely no IPv6 capability at all, likely a laundry list of security issues that date back to around 1998, likely no way to actually patch them, and no funding for new printers. I hear that IPv6 and IPsec are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how. I can likewise see how any defences I create would be overrun in short order. I've been running servers on the Internet for years now and I'm quite familiar with what's necessary to secure those, but putting something private like our billing database on the network has always been completely out of the question. What should I be replacing NAT with, if we don't have physically separate networks?
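
    What NAT actually provided here was default-deny stateful filtering, and a plain stateful firewall reproduces that without address translation. A minimal ip6tables sketch, assuming a Linux perimeter box or host firewall (the management address 2001:db8::10 and database port 1433 are placeholders, not details from the post):

      # ip6tables sketch: default-deny inbound, the same posture NAT gave you.
      ip6tables -P INPUT DROP
      ip6tables -P FORWARD DROP
      ip6tables -A INPUT -i lo -j ACCEPT
      ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # ICMPv6 must pass for neighbor discovery and path-MTU discovery.
      ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
      # Allow only the six management hosts to reach the billing database:
      ip6tables -A INPUT -s 2001:db8::10 -p tcp --dport 1433 -j ACCEPT

    Legacy IPv4-only printers never get a global IPv6 address in the first place, so they stay unreachable from the IPv6 Internet even without a filter in front of them.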

    Read the article

  • Move a screen session back to its original PID

    - by cron410
    Installed McMyAdmin (a Minecraft manager) on Ubuntu 12.04 32-bit. Wrote my own service to start McMyAdmin (a .NET app running in Mono) in its own screen session, and to be able to inject proper McMyAdmin commands into that session with the init.d script. It's been running great! Today, I decided to start installing a Source dedicated server (Counter-Strike pro mod). I determined it was going to be a long download process, so I quit the process and fired up a fresh screen session called "source". I pasted the command in, but it had a space at the beginning and bash complained about ignoring semaphores or some such. I detached and reattached the session. It's sliding like butter. I ctrl+a d out of the session and start exploring the new folder structure and figure out where I need to place a symbolic link. I go to resume that screen and this is what I see: $ screen -r source There are several suitable screens on: 20091.source (12/02/12 22:59:53) (Detached) 19972.source (12/02/12 22:57:31) (Detached) 917.minecraft (11/30/12 15:30:37) (Attached) It appears I am connected to the minecraft screen?! So I attach to the other screens one at a time: minecraft is running in 19972.source and sourceds is running in 20091.source. So how the hell did I move the minecraft process to another session called source, and why is my main terminal now "attached" to the minecraft screen? More: I just used exit to quit the PuTTY session, then logged back in; it's still the same. Did that 3 more times, and now the minecraft screen is gone and everything is acting as it should, except, of course, for the session name and start time of the "new" minecraft screen. Should I just submit this as a bug for GNU screen?
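
    Whatever caused the relabeling, the names themselves can be fixed in place; a screen session name is just a label on the PID, so renaming moves nothing. A sketch using the PIDs from the listing above:

      screen -ls                                       # list pid.name pairs
      screen -S 19972.source -X sessionname minecraft  # relabel in place
      screen -S 20091.source -X sessionname sourceds
      screen -r minecraft                              # now resumes the right one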

    Read the article

  • Hard crash when using bluetooth headset on MacBook Pro and Lion 10.7.2

    - by jtalarico
    I recently picked up a Bluetooth headset (Motorola S10-HD) and started using it with my MacBook Pro (17", purchased new in 2010) running Lion 10.7.2. Here's what works (stereo audio): iTunes, Spotify, Pandora (via browser), games (e.g. Minecraft, which is a Java app), audio from YouTube, Plex, VLC, and other video players. Here's what doesn't work well (stereo audio fails and the headset seems to go into mono, i.e. tinny-as-hell, mode): Google Hangout, Skype, GoTo Meeting. Here's what's just downright catastrophic: if I'm listening to stereo audio and then decide to jump into a Skype call (or Google Hangout, or GoTo Meeting), Bluetooth often crashes and I can only get things working again by shutting down the device, disabling Bluetooth, and getting things back up and running again. But the audio is still horrible, and MUCH better using just a simple set of iPhone earbuds and mic. About 80% of the time during such a call, Bluetooth crashes. And about 90% of the time, after the call ends, Skype is shut down, or I try to switch back to playing stereo audio, I get a hard crash!! The gray screen of death descends and I'm told I need to restart my machine. In one such instance, even after a reboot, I could not enable Bluetooth again ("Turn Bluetooth On" was grayed out in the menu bar). Is this just a weak implementation of Bluetooth by Apple, or is this a hardware issue? I've seen others posting similar issues, even on the Apple support site, indicating that Bluetooth headsets are failing left and right, but I haven't seen anyone mention hard crashes.

    Read the article

  • How to manage unprivileged administration of system services using Debian?

    - by ypnos
    At our lab, we have several services handled by different PhD students (like myself). Turnover is high and people do the job next to their research duties. Until now, services were running on different machines with different OS setups, which can quickly result in administration hell. We want to consolidate our service setup. Our main idea is that the people responsible for the services should not meddle with the underlying system anymore. Apart from core systems like NFS and Kerberos, a typical service is able to run as non-root already. I'm talking about Apache, MySQL, Subversion, mail with Open-Xchange, and so on. Redirecting privileged ports is also no issue (source). What is left is the configuration of the service and its payload. One scenario we envisioned is that every service has its own user and home directory, accessible by the corresponding admins. Backup and fallback of the service is easy, as everything needed for the service to run is found in one place. Are there established ways to create such a setup? Is there a mostly uniform method to make services find their files (other than in system directories) while still using the corresponding Debian packages? Are there any catches with our idea that we may have overlooked? Would you maybe claim that virtualization is the answer to our problem? (In our view, it wouldn't help us keep system setup strictly separated from service setup.) Thank you for any advice!
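
    A minimal sketch of the one-user-one-home-per-service idea on Debian (the service name svn-svc, the student alice, and the ports are placeholders, not details from the post):

      # Dedicated system user and home per service; student admins join its group.
      adduser --system --group --home /srv/svn svn-svc
      adduser alice svn-svc              # grant a student admin access
      chmod 2775 /srv/svn                # setgid: new files inherit the group
      # Run the daemon as svn-svc and redirect the privileged port to one it
      # can bind unprivileged:
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    Many Debian daemons also accept an alternative configuration path (apache2 -f, mysqld --defaults-file), which is what lets the configuration live under the service home rather than /etc while the stock packages stay installed.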

    Read the article

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64-bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to do the job, but to no avail... everyone seems to be lost, and all the threads I can find offer solutions that do not work for me. The problem: System DSNs not being seen by SSIS (see "SSIS DSN Not Showing as ODBC Data Source"). I made the decision to try out the ADO.NET connector... and to my surprise it is actually in the selection list of data sources in SSIS. So I take off running to create a Data Flow Task, create an ADO.NET Source (a local MSSQL DB)... all is good as usual. Then I move swiftly to creating an ADO.NET Destination, enter my credentials... wow, I am finally selecting a database on my Linux server! Happy, thinking that I have finally figured out a way to get the job done. Then I move to mappings... nope, something is wrong... I am getting an error that hurts my eyes: Pipeline component has returned HRESULT error code 0xC0208457 from a method call. Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns. The table name you entered may not exist, or you do not have SELECT permission on the table object, and an alternative attempt to get column properties through connection has failed. Detailed error messages are: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"database".tablename' at line 1. The descriptor file on path C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\ does not contain schema information for connection of type MySQL.Data.MySqlClient.MySqlConnection. So it looks like it can't get the information, and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful... thanks in advance to all!
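
    One hedged lead from the error text itself: SSIS quotes identifiers as "database".tablename, but stock MySQL only accepts backticked identifiers unless the ANSI_QUOTES SQL mode is enabled, which matches the syntax error shown. A sketch (credentials are placeholders; whether this also clears the ProviderDescriptors complaint is a separate question):

      # Make MySQL accept double-quoted identifiers. Note this statement
      # replaces the current mode list; append to it instead if other modes
      # are already set.
      mysql -u root -p -e "SET GLOBAL sql_mode = 'ANSI_QUOTES';"
      # Persistent equivalent in my.cnf, under [mysqld]:
      #   sql-mode = "ANSI_QUOTES"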

    Read the article

  • What might the reason for the slowness be? (see details in message body)

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem which looks very much like an empty waiting loop somewhere in code (while it probably isn't so). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the functional services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines (one for the company e-mail server, one for the web server, and one for the accounting/CRM server (Firebird + a proprietary app server working with Delphi-made clients)). CPU load (as "top" says) is almost always near zero. Host system RAM is close to 100% usage but not overloaded (very little swap gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of guest RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works hell-slow with this server (and works much better using an inside-LAN Windows server). I just can't imagine what can make it slow when there are so many CPU, RAM, HDD, and bandwidth resources available on the clients and on the server, even at their hardest moments. In saying bandwidth is underused, I not only know that the clients and the server are connected to the Internet with bigger channels than are really used (which still leaves a chance of a bottleneck of some sort on the route between them); I've also tested the bandwidth between clients and the server by copying files among them.
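
    One diagnostic the list above doesn't cover: chatty database protocols are bound by round-trip latency far more than by bandwidth, and a WAN link can have ample throughput yet a hundred times the LAN's RTT. A quick sketch (the hostname is a placeholder; iperf assumed installed on both ends):

      ping -c 20 server.example.com       # watch avg/max RTT, not just loss
      iperf -s                            # on the server
      iperf -c server.example.com -t 10   # on a client: raw TCP throughput

    If the RTT is tens of milliseconds, a client that issues thousands of small queries per screen will crawl no matter how idle the server is.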

    Read the article

  • Data loss by randomly unplugging the computer during runtime

    - by Kan
    I'm from Austria, and we and the Germans have some sort of bad science show which runs every day; what I call it would roughly translate to "half-knowledge", if you like. By the way: it is called "Galileo". So they thought they'd make a computer myth-busters video, and I couldn't believe what I saw and heard... The strangest thing to me was that they asked: "Does unplugging the computer damage your data?" Then they started up some machine with Vista on it, started copying some files, and randomly unplugged the PC's power cable, the whole thing around 50 times. After their computer continued to start up normally, they just said "nothing can happen, your data or computer can't be damaged". They of course excluded unsaved data in running programs like text editors from this. I asked myself: what the hell are their "computer experts" saying? You can't tell by unplugging the cable 50 times whether that can damage your computer. Can unplugging the cable during runtime cause data loss (contrary to what the moderator of the show said)? (I destroyed my Windows registry once during a reset.)

    Read the article

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH. For example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH; ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far: export http_proxy=http://user:pass@proxyip:8080 echo "user:pass" > ~/.corkscrew-auth echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config ssh 82.23.34.56 -l me -p 80 Proxy could not open connnection to 82.23.34.56: Forbidden ssh_exchange_identification: Connection closed by remote host (same without -p 80) Without corkscrew: ssh: connect to host 82.23.34.56 port 80: Connection timed out ssh: connect to host 82.23.34.56 port 22: Connection timed out Any other idea?
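
    A hedged guess from the "Forbidden" reply: many corporate proxies allow CONNECT only to port 443. If the home router can also map 443 to 22, a sketch reusing the values from the post:

      # ~/.ssh/config (sketch; assumes the router now forwards 443 -> 22)
      Host home
          HostName 82.23.34.56
          Port 443
          User me
          ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth

      ssh home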

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them. Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD to also run a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
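
    For the separate-VM idea, a minimal OpenVPN server config sketch (all values are placeholders; the push line is what lets the home machines reach the file shares across the tunnel):

      # /etc/openvpn/server.conf (sketch)
      port 1194
      proto udp
      dev tun
      ca ca.crt
      cert server.crt
      key server.key
      dh dh1024.pem
      server 10.8.0.0 255.255.255.0
      # Route the office LAN to VPN clients so they can reach the file server:
      push "route 192.168.1.0 255.255.255.0"
      persist-key
      persist-tun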

    Read the article

  • AA doesn't work due to monitor?

    - by MikeQ
    I have a monitor (with TV tuner - Philips 221T1SB), native resolution 1920x1080, but there is an overscan setting in Catalyst. It goes from left to right, '10%' to '0%'; if I set it (all the way to the left) to 10%, I get a screen with black borders (you know what I mean), but if I set it to 0%, I get a filled monitor. It does make sense, but why the hell is there such an option? I never had any AA issues before, but now, with this monitor, there they are. I can't find any solution; my only lead is the overscan, or pretty much anything within the CCC settings. Yes, I almost forgot: after a bit of trying to configure the monitor with the remote, I found there is a PC channel option, a few other channels, and then an HDMI option. I have the monitor connected through HDMI, but why do I get a black screen when I select the PC channel? That might be another problem. I tried almost everything. Please guys, give me a hand. I don't want jaggies! CPU: Intel i5-750 @ 4.0GHz RAM: Corsair Vengeance 8GB 1600MHz GPU: AMD Gigabyte HD 7950 Windforce @ 1100/1400, 1.174V Motherboard: ASUS P7P55D-E EVO

    Read the article

  • Fake demonstration software for the command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to rewrite various commands, wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands: echo "hello world!" ls ls -l ls -l | sort I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$ echo "hell and if I typed any other letters, the terminal would fill up with the remainder of the command until I press Enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is: does similar software exist for giving demonstrations? Or even, is this an easy thing to script up...? EDIT - two quick things: first of all, I'm on OS X, but it would be nice to get a general solution for other people who arrive here from Google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually, I'd like it to at least look like I'm typing live; that's why I put in the bit about the one-to-one keymapping, but I don't think I explained that quite as well as I could have...
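
    The script-up route is straightforward; a minimal bash sketch (the file name demo.txt and the prompt format are assumptions) that reveals one pre-recorded character per keypress, regardless of which key was actually hit, and runs the command on Enter:

      #!/bin/bash
      # play.sh - fake live typing for demos: each keypress reveals the next
      # prepared character; Enter executes the revealed command.
      while IFS= read -r cmd; do
          printf '%s$ ' "${PWD##*/}"            # imitation prompt
          for ((i = 0; i < ${#cmd}; i++)); do
              IFS= read -rsn1 _ < /dev/tty      # swallow the real keystroke
              printf '%s' "${cmd:i:1}"          # echo the scripted one instead
          done
          IFS= read -rs _ < /dev/tty            # wait for Enter
          echo
          eval "$cmd"                           # run the prepared command
      done < "${1:-demo.txt}"

    Run it as ./play.sh demo.txt. Keystrokes are read from /dev/tty because stdin is already redirected from the command file; mistyping is invisible since the keys actually pressed are never echoed.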

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
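
    Since squid already fronts example1.com, peer selection by URL path may be all this needs; a squid.conf sketch (the peer names and ACL name are assumptions, and example2.com will still see the /example2 prefix, so a url_rewrite_program may be needed to strip it):

      # Route /example2/* to the standalone server; everything else stays on
      # the existing backend. The browser URL never changes.
      cache_peer 127.0.0.1    parent 81 0 no-query originserver name=example1
      cache_peer example2.com parent 80 0 no-query originserver name=example2
      acl to_ex2 urlpath_regex ^/example2
      cache_peer_access example2 allow to_ex2
      cache_peer_access example2 deny all
      cache_peer_access example1 deny to_ex2
      cache_peer_access example1 allow all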

    Read the article

  • Screen recording in Windows 8 makes PC unusable?

    - by Skadier
    OS used: Windows 8 Pro x64. Hi there, I've got a weird problem. I tried to record my screen using different screen-recording applications like Camtasia 7, HyperCam, and some others. If I hit "record", everything works fine for about 3-4 seconds. Then the PC slows down heavily, caused by extremely high IO usage. The PC becomes nearly unusable and lags like hell. I can't switch applications, can't get Task Manager with Ctrl+Alt+Del, or do anything else without waiting 1-3 minutes. After about 5-8 minutes and a bit of luck, I am sometimes able to end the recorder's process, and IO calms down very slowly (100% IO lasts for about 2 minutes before becoming normal). I don't know why this happens. Screen recorders that support Windows 8 aren't out yet, AFAIK, but at least Camtasia 7 worked with the Windows 8 Developer Preview and Windows 8 Release Preview (I only had to change the record method to .avi). No slowdowns or anything else. Is there anyone out there who knows this problem too? Is there maybe a solution to this problem?

    Read the article

  • Randomly unusable wireless connection. Oddest issue ever...

    - by Hallucynogenyc
    I have two desktop computers, a laptop, and a smartphone. All of them connect perfectly to my router (WRT54GL with Tomato firmware), but randomly (maybe once a week) something odd happens: for over an hour or so, one of the desktop computers (Windows 7 Pro x64) will just refuse to connect properly to the router. Only that computer, and only to that network. I can connect all the other machines to the router perfectly, and I can connect properly to other networks with that machine. By "not being able to connect properly" I mean that the OS will just tell me "not able to connect to..", or it will connect but then say that it has no Internet access, or it connects but then takes minutes to load any website, even the router's web interface. I've tried changing from WPA2 to WEP and back to WPA2; I've tried different network adapters (one internal PCI card and one external USB card); I've tried removing networks from Windows and adding them again... without making any difference. It works for some days, but in the end it always happens again at some point. I just have no clue what the hell could be going on here. I first thought it was the network adapter, then the router, then Windows... Thanks in advance

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that has existed in Oracle HWF (Human Workflow) since 10g. In 11g, however, there have been many improvements to this feature, and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features:

    Ability to link attachments at Task scope or at Process scope. "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments get lost: once the human task is completed, attachments can be retrieved in order to, e.g., check them in to a content server or inject them into a new and different human task. (Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and, optionally, payload. See here for more info.) "Process" attachments are visible within the scope of the process, which means that subsequent human tasks in the same process instance will have access to them.

    Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of the HWF database backend. This feature adds all content-server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc.). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC solely for document management within the BPM scope.

    Here are some code samples that leverage the above features.

    Retrieving uploaded attachments (non-UCM). Non-UCM attachments (the default ones, which have existed since 10g and are stored as-is in the HWF database backend) can be retrieved after the completion of the Human Task. Firstly, we need to know whether any attachment has actually been uploaded to the human task. There are two ways to find this out: through an XPath function, or by checking the execData/attachment[] structure.

    Once we are sure one or more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean to get the attachment name and the payload of the file. (Aside note: Oracle HWF lets you upload two kinds of [non-UCM] attachments: a desktop document and a Web URL. This example focuses on the desktop document one. In order to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure.) Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function. This example shows how to retrieve as many attachments as were uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows: a dummy UserTask using the "HumanTask1" Human Task, followed by an embedded subprocess that retrieves the attachments (we're assuming at least one attachment is uploaded) and, once they are retrieved, writes each of them back to a file on the server using a File Adapter service.

    In detail: we've defined an XSD structure that will hold the attachments (both name and payload). From such an element (attachmentCollection) we can create a BusinessObject and then a variable (named attachmentBPM) of that BusinessObject type. We will also need to keep a copy of the HumanTask output's execData structure, so we create a variable of type TaskExecutionData and copy the HumanTask output execData to it.

    Now we get into the embedded subprocess that retrieves the attachments' payload. First, using an XSLT transformation, we feed the attachmentBPM variable with the name of each attachment, setting an empty value for the payload. Please note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an empty text value for the payload: the reason is to make sure the <payload></payload> tag gets created, which is needed when we map the payload to the XML variable later. (Aside note: we are assuming that we're retrieving non-UCM attachments. In real life, however, you might want to check the type of attachment you're handling. execData/attachment[]/storageType contains the value "UCM" for UCM-type attachments, "TASK" for non-UCM ones, or "URL" for Web URL ones. Those values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration.)

    Once we have fed the attachmentsBPM structure so that it contains the name of each attachment, it is time to iterate through it and get the payload. For that we use a new embedded subprocess of type MultiInstance, which iterates over the attachmentsBPM/attachment[] element. In every iteration we use a Script activity to map the corresponding payload element to the result of the XPath function getTaskAttachmentContents(). Note how the target array element is indexed with the loopCounter predefined variable, to make sure we're feeding the right element during the array iteration. The XPath function used looks as follows:

      hwf:getTaskAttachmentContents(
          bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId,
          bpmn:getDataObject('attachmentsBPM')/ns:attachment[
              bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')
          ]/ns:fileName)

    where the input parameters are: the taskId of the just-completed Human Task, the name of the attachment whose payload we're retrieving, and the array index (the loopCounter predefined variable). (Aside note: the reason we iterate the execData/attachment[] structure through an embedded subprocess and not, e.g., using XSLT and for-each nodes, is mostly that the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So all of this example might be considered a workaround until this gets fixed/enhanced in future releases.)

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsBPM/attachment[] element. On each iteration we use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), and we have to map it using XPath (Simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box; again, note how we make use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    Handling UCM attachments will be part of a different, upcoming blog entry.
    Once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • Why does adding Core Animation slow down drawing of my NSTableView?

    - by Will Goring
    I have a simple app that displays a list of items using NSTableView. There are usually about 15-25 items in the list, and the table has 10 columns, all but one of which are text (the other's an icon). There are simple data transformers on a couple of the columns. So nothing taxing; you'd expect it to run just fine. And as it stands, it is; the app is responsive and everything draws pretty much instantly. Scrolling, changing column sizes, resizing the window; whatever you do, the redraws keep up with the mouse pointer. Tonight, I thought I'd try adding a little visual flair to the app with Core Animation (which I've never used before), but I've found that if anything above the NSTableView in the hierarchy has "Wants Core Animation Layer" set, then the NSTableView redraws go to hell. Scrolling is usually fine, but resizing columns or the window causes it to pick up an effect like texture tearing, where some of the rows have the new size, some have the old one, and everything flickers horribly. It looks terrible. Basically, it looks like what you'd expect if rendering the individual rows or columns was taking a long time. I've put it through Shark and, sure enough, it looks like it's spending most of its time drawing the text in the cells; what I don't understand is why that should take longer when there's a Core Animation layer involved than when there isn't, and quite noticeably longer at that. Has anyone got any ideas? Is there any Core Animation initialisation or config I've missed or something (I've literally just ticked the "Wants Core Animation Layer" box in IB)? Cheers, Will

    Read the article

  • C# GUI: setting control.Enabled to false fires OnClick event?

    - by Daniel
    For some really weird reason, when I set the .Enabled property to false on a simple text box on a small GUI, it fires a radio button's OnClick event, and that's causing lots of problems. I have breakpointed the txtBox.Enabled = false; line, and after stepping over OR into it, I jump straight to the OnClick event of the radio button control. Here is the call stack as that happened:

      TestGUI.exe!TestGUI.frmMain.radiobuttonClicked(object sender = {Text = "Download Single Episode" Checked = true}, System.EventArgs e = {System.EventArgs}) Line 67 C#
      System.Windows.Forms.dll!System.Windows.Forms.Control.OnClick(System.EventArgs e) + 0x70 bytes
      System.Windows.Forms.dll!System.Windows.Forms.RadioButton.OnClick(System.EventArgs e) + 0x27 bytes
      System.Windows.Forms.dll!System.Windows.Forms.RadioButton.OnEnter(System.EventArgs e = {System.EventArgs}) + 0x3e bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.NotifyEnter() + 0x20 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.UpdateFocusedControl() + 0x195 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.AssignActiveControlInternal(System.Windows.Forms.Control value = {Text = "Download Single Episode" Checked = true}) + 0x54 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.ActivateControlInternal(System.Windows.Forms.Control control, bool originator = false) + 0x76 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.SetActiveControlInternal(System.Windows.Forms.Control value = {Text = "Download Single Episode" Checked = true}) + 0x73 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.SetActiveControl(System.Windows.Forms.Control ctl) + 0x33 bytes
      System.Windows.Forms.dll!System.Windows.Forms.ContainerControl.ActiveControl.set(System.Windows.Forms.Control value) + 0x5 bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.Select(bool directed, bool forward) + 0x1b bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.SelectNextControl(System.Windows.Forms.Control ctl, bool forward, bool tabStopOnly, bool nested, bool wrap) + 0x7b bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.SelectNextControlInternal(System.Windows.Forms.Control ctl, bool forward, bool tabStopOnly, bool nested, bool wrap) + 0x4a bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.SelectNextIfFocused() + 0x61 bytes
      System.Windows.Forms.dll!System.Windows.Forms.Control.Enabled.set(bool value) + 0x42 bytes

    What the hell? It wouldn't have anything to do with the way I subscribe to the events, would it?

      this.radioBtnMultipleDownload.Click += radiobuttonClicked;
      this.radioBtnSingleDownload.Click += radiobuttonClicked;
      this.radioCustomUrl.Click += radiobuttonClicked;

    Read the article

  • WPF DataGrid HeaderTemplate Mysterious Padding

    - by Jake Wharton
    I am placing a single button with an image in the header of a column of a DataGrid. The cell template is also just a simple button with an image.

      <my:DataGridTemplateColumn>
          <my:DataGridTemplateColumn.HeaderTemplate>
              <DataTemplate>
                  <Button ToolTip="Add New Template" Name="AddNewTemplate"
                          Click="AddNewTemplate_Click">
                      <Image Source="../Resources/add.png"/>
                  </Button>
              </DataTemplate>
          </my:DataGridTemplateColumn.HeaderTemplate>
          <my:DataGridTemplateColumn.CellTemplate>
              <DataTemplate>
                  <Button ToolTip="Edit Template" Name="EditTemplate"
                          Click="EditTemplate_Click" Tag="{Binding}">
                      <Image Source="../Resources/pencil.png"/>
                  </Button>
              </DataTemplate>
          </my:DataGridTemplateColumn.CellTemplate>
      </my:DataGridTemplateColumn>

    When rendered, the header has approximately 10-15px of padding on the right side only, which causes the cells to render at that width, leaving the cell button with empty space on both sides. Being a pixel-perfectionist, this annoys the hell out of me. I had initially thought that it was space for the arrows displayed when sorted, but I have sorting disabled both on the entire DataGrid and explicitly on the column. I assume this is padding from whatever is the parent element of the button. Does anyone know a way to eliminate it?

    Read the article

  • Threading 101: What is a Dispatcher?

    - by Water Cooler v2
    Once upon a time, I remembered this stuff by heart. Over time, my understanding has diluted and I mean to refresh it. As I recall, any so-called single-threaded application has two threads: a) the primary thread, which has a pointer to the main or DllMain entry points; and b) for applications that have some UI, a UI thread, a.k.a. the secondary thread, on which the WndProc runs, i.e. the thread that executes the WndProc that receives the messages Windows posts to it. In short, the thread that executes the Windows message loop. For UI apps, the primary thread is in a blocking state waiting for messages from Windows. When it receives them, it queues them up and dispatches them to the message loop (WndProc), and the UI thread gets kick-started. As per my understanding, the primary thread, which is in a blocking state, is this. C++:

      while (GetMessage(&msg, NULL, 0, 0)) {
          TranslateMessage(&msg);
          DispatchMessage(&msg);
      }

    C# or VB.NET WinForms apps:

      Application.Run(new System.Windows.Forms.Form());

    Is this what they call the Dispatcher? My questions are: a) Is my above understanding correct? b) What in the name of hell is the Dispatcher? c) Point me to a resource where I can get a better understanding of threads from a Windows/Win32 perspective and then tie it up with high-level languages like C#. Petzold is sparing in his discussion of the subject in his epic work. Although I believe I have it somewhat right, a confirmation will be relieving.

    Read the article

  • Android Live Wallpaper: waitForCondition(ReallocateCondition)

    - by jstatz
    I've been developing a live wallpaper using GLWallpaperService, and have gotten good results overall. It runs rock-solid in the emulator and looks good. I've dealt with OpenGL many times before, so I have a solid command of how to do things... unfortunately, I'm having a hell of a time getting this to actually be stable on the actual hardware. The basic symptom occurs when you slide the physical keyboard on a Motorola Droid in and out a few times. This causes the wallpaper to get destroyed/recreated several times in quick succession, which would be fine, as I have my assets cleared in onDestroy and reloaded in onSurfaceChanged. The problem is that after a few iterations of this (four or five, maybe) the calls to onSurfaceChanged completely stop, and I get an endless string of this printed to the log: 04-02 00:53:18.088: WARN/SharedBufferStack(1032): waitForCondition(ReallocateCondition) timed out (identity=337, status=0). CPU may be pegged. trying again. Is there something I should be implementing here aside from the Android-typical onSurfaceCreated/onSurfaceChanged/onSurfaceDestroyed triumvirate? Browsing through the WallpaperService and WallpaperRenderer classes doesn't pop up anything obvious to me.

    Read the article

  • AutoMapper: a viable alternative to two-way databinding using a FormView?

    - by tbone
    I've started using the FormView control to enable two-way databinding in ASP.NET WebForms. I liked that it saved me the trouble of writing loadForm and unloadForm routines on every page. So it seemed to work nicely at the start, when I was just using textboxes everywhere... but when it came time to start converting some to DropDownLists, all hell broke loose. For example, see: http://stackoverflow.com/questions/2435185/not-possible-to-load-dropdownlist-on-formview-from-code-behind ... and I had many additional problems after that. So I happened upon an article on AutoMapper, which I know very little about yet, but from the sounds of it, this might be a viable alternative to two-way databinding a form to a domain entity object? From what I understand, AutoMapper basically operates on naming convention, so it will look for matching named properties on the source and destination objects. So, basically, I have all my domain entities (e.g. Person) with properties (FirstName, LastName, Address, etc.)... what I would like to be able to do is declare my ASP controls with those exact same names, and have AutoMapper do the loading and unloading. One obvious caveat is that AutoMapper would have to know the proper property name for each control type, i.e.: Person.FirstName -- form.FirstName.Text Person.Country -- form.Country.SelectedValue Person.IsVerified -- form.IsVerified.Checked ... so it would have to have the smarts to find the control on the form, determine its type, and then load/unload between the domain object and the webform control using the proper property of the control. So if this worked, a person could just get rid of the cursed FormView control entirely, and it would be just one line of code each for binding and unbinding a webform. Possible?

    Read the article

  • UINavigationController crash because of pushing and popping UIViewControllers

    - by Wayne Lo
    My question is related to my discovery of a reason for UINavigationController to crash, so I will tell you about the discovery first. Please bear with me. The issue: I have a UINavigationController as a subview of UIWindow, a rootViewController class, and a custom MyViewController class. The following steps will get an EXC_BAD_ACCESS, 100% reproducible: [myNavigationController pushViewController:myViewController_1stInstance animated:YES]; [myNavigationController pushViewController:myViewController_2ndInstance animated:YES]; then hit the left back button twice (popping the two myViewController instances) to show the rootViewController. After a painful half day of trial and error, I finally figured out the answer, but it also raises a question. The solution: I had declared many objects in the .m file as a lazy way of declaring private variables, to avoid cluttering the .h file. For instance:

      #import "MyViewController.h"

      NSMutableString *variable1;

      @implementation ...
      -(id)init {
          ...
          variable1 = [[NSMutableString alloc] init];
          ...
      }
      -(void)dealloc {
          [variable1 release];
      }

    For some reason, the iPhone OS may lose track of these "lazy private" variables' memory allocation when myViewController_1stInstance's view is unloaded (while still in the navigation controller's stack) after loading the view of myViewController_2ndInstance. The first tap on the back button is OK, since myViewController_2ndInstance's view is still loaded, but the second tap gave me hell because it tried to dealloc the 2nd instance: executing [variable1 release] resulted in EXC_BAD_ACCESS because it pointed randomly (a dangling pointer). Fixing this problem is simple: declare variable1 as @private in the .h file. Here is my question: I have been using "lazy private" variables for quite some time without any issues, until they were involved in UINavigationController. Is this a bug in iPhone OS? Or is there a fundamental misunderstanding on my part about Objective-C? Please help.

    Read the article

  • Custom HTTPHandler causing caching or session issues?

    - by Jan de Jager
    So I have a custom CMS running under .NET 3.5, written entirely in C#. The engine is optimized to render for mobile devices, but it also serves normal web browsers. It also supports cookieless sessions. Great... I've chosen not to cache anything (including browser data) in order to control the rendering completely from data. This has been all good until lately. The engine implements a basic login function that simply logs the user state within a session object. The behavior is rather strange: users will click through the site with no problem, then log in. The login will either go through successfully or just redisplay the login screen, suggesting a cached page being returned or redisplayed... If the login is successful, the concurrent page hits will switch arbitrarily between logged-in and logged-out state... also suggesting either that the session state is not accessible or that a cached page is being returned. I have debugged the hell out of the thing... including using Fiddler and the like. When debugging, the behavior disappears. Huh? One of the sites running on the engine is http://www.wiseguy.mobi (sorry, customized for South Africa, so you'll probably not be able to get the password text message)!

    Read the article

  • How to weed out the bad programmers from the competent ones in the interview process

    - by thaBadDawg
    I am getting ready to add another developer to my team, and I want to try to fix the mistakes I made in my last hiring cycle. I like to think of myself as a competent programmer (I can be given a project, I can deliver on that project, and the deliverables work with very few, if any, bugs), and so I ask questions that I would ask myself in an interview. I've come to the conclusion that my interviewing skills are completely lacking, because the last two people I've hired interviewed incredibly well but have been less than ideal at the tasks they've been given. My CTO (who was completely useless in giving any guidance as to how) suggested I improve my interviewing skills. The question is this: how does one programmer interview another programmer and get an understanding of the other programmer's abilities? Edit: Though slightly different, the answers provided to this question could be of use to you. That question concerns specific interview questions, while yours seems to be more general, about interview approaches and not just the questions themselves. Update: Just for the hell of it, I asked two of the guys I work with if they could do FizzBuzz. 45 minutes and 80 minutes to work it out. And these aren't bottom-level guys, either.

    Read the article
