Search Results

Search found 19958 results on 799 pages for 'bit fiddling'.


  • Problem with Mono and .exe file

    - by Vere Nicolson
    I have purchased a piece of software to configure programmable radio control transmitters. It says it will run on Linux; the vendor's description reads:
    "Digital Radio runs on: Microsoft Windows 2000/2003/XP, Microsoft Windows Vista/Seven/2008, Linux Ubuntu or a distribution with Mono, 32 or 64 bit, also in a virtual machine. Linux requires the Mono package installed, along with the Visual Basic 2005 runtime library. The Linux version is the same executable file as the Windows version, and can be executed using Mono. You don't need Wine. All the tests have been done on Ubuntu Desktop 10.10."
    I have tried for weeks to get the drivers for the cable to work in XP or Win7 and I admit defeat. It looks like Ubuntu can run the cable effortlessly, but now I can't get the software going. I tried to run it in Ubuntu 10.04 with Mono; the GUI failed and I got the following message in the terminal:
      $ mono ~/Desktop/GigRadioLinux/DigitalRadio/DigitalRadio.exe
      The entry point method could not be loaded
    The Windows installation requires entering a 30-odd-character passkey and a 4.24k text file as a "license" while the exe is running. Can someone tell me how to enter the passkey and license in the terminal, or is that not my primary problem? I don't understand "entry point method". I tried Wine and that didn't work either. The developer responded to my earlier emails about the cable drivers, but hasn't replied to questions regarding this. If I have left out anything important, let me know and I will try to supply more information.

    Read the article

  • Ubuntu 10.04 - unable to install Arduino

    - by Newbie
    Hello! At the moment I'm trying to install Arduino on my Ubuntu 10.04 (32-bit) computer. I downloaded the latest release from http://arduino.cc/en/Main/Software, cd'ed to the directory and unzipped the package. When I try to run ./arduino, I get the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
        at processing.app.Base.main(Base.java:112)
      Caused by: java.awt.HeadlessException
        at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
        at processing.core.PApplet.<clinit>(Unknown Source)
        ... 1 more
    Here is my java -version output:
      java version "1.6.0_20"
      OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1)
      OpenJDK Server VM (build 19.0-b09, mixed mode)
    Any suggestions? The above is an attempt to install Arduino without the 'arduino' package. I also tried installing it with apt-get (sudo apt-get install arduino). Starting it (with the arduino command) causes the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
        at processing.app.Preferences.load(Preferences.java:553)
        at processing.app.Preferences.load(Preferences.java:549)
        at processing.app.Preferences.init(Preferences.java:142)
        at processing.app.Base.main(Base.java:188)
      Caused by: java.awt.HeadlessException
        at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
        at processing.core.PApplet.<clinit>(PApplet.java:224)
        ... 4 more
    Update: I saw that I had installed several JREs (Sun and OpenJDK), so I uninstalled the OpenJDK one. Now, when calling arduino, I get a new error:
      java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path thrown while loading gnu.io.RXTXCommDriver
      Exception in thread "main" java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at gnu.io.CommPortIdentifier.<clinit>(CommPortIdentifier.java:123)
        at processing.app.Editor.populateSerialMenu(Editor.java:965)
        at processing.app.Editor.buildToolsMenu(Editor.java:717)
        at processing.app.Editor.buildMenuBar(Editor.java:502)
        at processing.app.Editor.<init>(Editor.java:194)
        at processing.app.Base.handleOpen(Base.java:698)
        at processing.app.Base.handleOpen(Base.java:663)
        at processing.app.Base.handleNew(Base.java:578)
        at processing.app.Base.<init>(Base.java:318)
        at processing.app.Base.main(Base.java:207)

    Read the article

  • Is there any way to get msiexec to echo to stdout instead of logging to a file

    - by mrmrcoleman
    As part of a continuous delivery pipeline I'd like to install an MSI on a given machine. msiexec plus psexec does this perfectly, but it seems that msiexec can only log to a file, and I need it to log to stdout/stderr. Right now, to get the output back into our CI software, I'll have to add a second step that echoes the contents of the log, which seems a bit pointless. Has anybody faced this issue before (and overcome it)? Thanks in advance for any help here. Mark

    Read the article

  • How can I limit my data usage over tethering on Windows?

    - by Casebash
    The excess data charges if I go over my tethering data limit are ridiculous. Fennec already stated the question well. Because of this, and on general principle, I'd like to have some tools which let me do things like:
    - Monitor the amount of bandwidth that I've used. I think I can do this from Sprint too, but on-the-computer is nice as well, and it gives me the possibility of breaking usage down by application.
    - See what sort of programs are using the Internet connection. I could use something like Wireshark, but that's a bit too micro-level to be practical.
    - Keep those programs, and the operating system, from doing things like "downloading an operating system update" while on the mobile hotspot.
    Related: I want to monitor and limit OS X's data transfer while I'm tethering via my iPhone.

    Read the article

  • Correct way to treat iptables init failure?

    - by chris_l
    Hi, I'm initializing my iptables rules via /etc/network/if-pre-up.d/iptables, using iptables-restore. This works fine, but I'm a bit worried about what would happen if that script failed for some reason (maybe the saved iptables file is corrupt, or whatever). In case the script failed, I'd like to:
    - Start up my network interfaces without any iptables rules
    - Start up the OpenSSH server
    - But not start any other services like the web server, ... (and maybe stop running instances)
    Is there a good canonical way to do that? Going into a lower init stage? I haven't done that in a long time, and I think a lot about init has changed in recent years. Which stage should I drop to, and would the OpenSSH server and my network interfaces still run?
    Thanks, Chris (on Debian Lenny)

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
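    For reference, disabling or dropping that trigger (a minimal sketch, using the trigger name from the template above) looks roughly like this:
      -- Turn the demo logon trigger off before experimenting with Resource Governor,
      -- or drop it entirely.
      DISABLE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;
      -- DROP TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;
      GO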
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.
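    If you want to tear the demo back down afterwards, a rough cleanup sketch, using the object names from the setup above, would be:
      USE msdb;
      GO
      -- Drop in dependency order: notification, then service, then queue, then activation procedure.
      DROP EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER;
      DROP SERVICE SA_LOGIN_PRIORITY_s;
      DROP QUEUE SA_LOGIN_PRIORITY_q;
      DROP PROCEDURE dbo.SA_LOGIN_PRIORITY_act;
      GO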

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspxI have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging  - Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but reqire more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communcation are few and far between when you work on a remote team. When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later. 
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase, “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation 2 - Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while your narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to a in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Takes notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with  > 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Is there a standard Mac OS X Snow Leopard home folder naming convention?

    - by JGFMK
    I recently re-installed a fresh copy of Snow Leopard on my Mac. I was a bit surprised that, when the process completed, it had changed the name of my home folder to use my name rather than my initials. This messed up all the paths in my Eclipse IDE workspace. I vaguely remember being asked whether I use it for home or business use, and I wonder if something I did there has a bearing on the way the default name is assigned. Does anyone have more information on this? Links to official Apple install documentation etc. would be appreciated. Cheers.

    Read the article

  • Can't find instructions how to use Windows 7 drivers on Windows Server 2008 R2

    - by Robert Koritnik
    Windows 7 x64 comes with all sorts of signed drivers, so there's a high probability that all drivers for your machine will be installed during system setup. Windows Server 2008 R2, on the other hand, doesn't, even though it's practically the same OS when it comes to drivers. I know there's a very good reason for this difference: it's a server product, not a desktop one. But the thing is that many power users and developers use the server OS on their workstations, which are usually desktop machines (a bit more powerful, though), and would benefit from the whole driver spectrum that Windows 7 offers...
    Question: I know I've read on the internet about some trick where you first install Windows 7, then do something to capture either all Windows 7 drivers or just the installed ones, and then install Windows Server 2008 R2 and use those Windows 7 drivers. The thing is, I can't find these instructions on the internet any more. If anybody knows where they are, please provide the link for the rest of us.

    Read the article

  • Trouble downloading a file due to lack of permission

    - by user40495
    After downloading this file: http://dl.dropbox.com/u/315/hon/hon_update.exe, before it actually copies to my computer I get this error:
      File Access Denied (title of the dialog box)
      You need permission to perform this action. You require permission from the computer's administrator to make changes to this file.
    Within this box are two buttons: Try Again or Cancel.
    I have searched other similar "permission"-type questions, but they all seem to refer to existing files or folders. How do I give permission to a file which doesn't exist yet (since it is in the process of downloading and hasn't been fully copied over yet)? I am running Windows 7 64-bit. I would be happy to supply any other information to help clarify.

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.
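    For comparison, here is a minimal, hypothetical sketch of that CSV approach on the SQL side. The comma-delimited string is assumed to be built by an SSRS expression such as =Join(Parameters!customerIds.Value, ","), and the Customers table/column names are illustrative only:
      -- Hypothetical: match each CustomerId against a comma-delimited parameter value.
      DECLARE @customerIdCsv nvarchar(max) = N'3,17,42';
      SELECT c.CustomerId, c.CustomerName
      FROM Customers AS c  -- illustrative table name
      WHERE CHARINDEX(',' + CAST(c.CustomerId AS varchar(20)) + ',',
                      ',' + @customerIdCsv + ',') > 0;
    Note that this kind of string matching can't use an index on CustomerId, which is part of why the table-valued parameter route is attractive in the first place.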

    Read the article

  • Isometric displaying two different images in different positions

    - by Canvas
    I'm creating a simple isometric game using HTML5 and JavaScript, but I can't seem to get the display to work. At the moment I have 9 tiles that have X and Y positions, and the player has an X and Y position. The player's X and Y properties are set to 100, and the tiles are set up as shown:
      tiles[0] = new Array(3);
      tiles[1] = new Array(3);
      tiles[2] = new Array(3);
      tiles[0][0] = new point2D( 100, 100);
      tiles[0][1] = new point2D( 160, 100);
      tiles[0][2] = new point2D( 220, 100);
      tiles[1][0] = new point2D( 100, 160);
      tiles[1][1] = new point2D( 160, 160);
      tiles[1][2] = new point2D( 220, 160);
      tiles[2][0] = new point2D( 100, 220);
      tiles[2][1] = new point2D( 160, 220);
      tiles[2][2] = new point2D( 220, 220);
    I use this method to work out the isometric position:
      function twoDToIso( point ) {
          var cords = point2D;
          cords.x = point.x - point.y;
          cords.y = (point.x + point.y) / 2;
          return cords;
      }
    point2D is:
      function point2D( x, y) {
          this.x = x;
          this.y = y;
      }
    I'm sure this does work out the correct positioning, but here is the output (screenshot: isometric view). I just need to move my player position a tiny bit, but is that the best way to display my player in the right position?
    Canvas
    P.S. The tile width is 120 and the height is 60, and the player is 30 wide by 15 high.

    Read the article

  • Windows 2008 Group Policy Setting? - Migration Headache

    - by DevNULL
    I have a small domain of users that I just migrated from a Linux domain running OpenLDAP. Our new servers are running Windows 2008 Standard. I've installed Active Directory and everything is working perfectly... except that the initial user privileges are pretty restrictive and I need to loosen them up a bit. For example, once users log in to their workstations, they can create new files and folders but cannot modify existing files or start. I basically want to open it all up except for software installations. Can someone please help with this migration headache?

    Read the article

  • Access PC Settings Easily from Your Desktop in Windows 8 and 8.1

    - by Akemi Iwaya
    Accessing your system’s settings in Windows 8 is not exactly the most straight-forward of processes, so if you need to change your settings often, then it can be a bit frustrating. With that in mind, the good folks over at 7 Tutorials have created an awesome shortcut that will take all the hassle out of accessing those settings, and make ‘tweaking’ Windows 8 much easier. After downloading the zip file, extract the exe file and place it in an appropriate folder, then create a shortcut. Once you have the new shortcut set up in the desired location (i.e. desktop or pinned to the taskbar), accessing your system’s settings has never been easier in Windows 8 and 8.1! Special Note: If you are someone who runs files through VirusTotal before using them, be aware that two listings there (Commtouch and Symantec) will flag the file as malware. We had no problems on our system whatsoever and believe the malware flags to be false positives. Download the Desktop Shortcut to PC Settings, for Windows 8 & 8.1 [7 Tutorials]     

    Read the article

  • Acer Aspire ASE700-UQ660A will not respond to power button

    - by Tim R.
    This is something of a continuation of this question. I am now completely unable to boot this computer. The last time I used it, I used hibernation mode. When I needed to use it again, it would not respond at all to the power button, keyboard, or mouse. I tried:
    - Holding down the power button for 15 seconds
    - Pressing the power button
    - Unplugging the power cord for 30 seconds, plugging it back in, and trying again
    - Removing the motherboard battery for over a minute and reinstalling it
    Before removing the motherboard battery, none of the lights on the front of the computer lit up. After reinstalling the battery and plugging the power cord back in, the light behind the power button is constantly illuminated (without even pressing the power button), but there is still no response to the power button, no fans turned on, nothing else that would indicate that it is running.
    System: Acer Aspire ASE700-UQ660A (Specs should be all factory defaults except:)
    - 4 GB RAM
    - Nvidia GeForce 8600 GT with driver version 197.45
    - Windows 7 Professional 64 bit

    Read the article

  • Cannot locate packages for Citrix install

    - by Noel Evans
    I'm trying to get a Citrix Receiver installed onto Ubuntu 14.04 (64-bit), following Ubuntu's docs. The first line of the instructions says to get these required packages:
      sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386
    But if I paste in that line, I get this error:
      Reading state information... Done
      E: Unable to locate package libmotif4
      E: Unable to locate package libxp6
      E: Unable to locate package libxpm4
      E: Unable to locate package libasound2
    My repository settings are below. Is there anything I'm missing in there? Otherwise, what do I need to do to install these?
      $ cat /etc/apt/sources.list
      deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ precise main restricted
      deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ trusty main restricted
      deb-src http://archive.ubuntu.com/ubuntu trusty main restricted #Added by software-properties
      deb http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse #Added by software-properties
      deb http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse
      deb-src http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse #Added by software-properties
      deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse #Added by software-properties
      deb http://archive.ubuntu.com/ubuntu/ trusty-proposed main universe restricted multiverse
      deb http://archive.ubuntu.com/ubuntu/ trusty-backports main universe restricted multiverse

    Read the article

  • My software is hosted on a "bad" website. Can I do anything about it?

    - by Abluescarab
    The software I've created is hosted on what you could call a "bad" website. It's hard to explain, so I'll just provide an example. I've made a free password generator. This, along with most of my other FREE software, is available on this website. This is their description of my software:
      Platform: 7/7 x64/Windows 2K/XP/2003/Vista
      Size: 61.6 Mb
      License: Trial
      File Type: .7z
      Last Updated: June 4th, 2011, 15:38 UTC
      Avarage Download Speed: 6226 Kb/s
      Last Week Downloads: 476
      Toatal Downloads: 24908
    Not only is the size completely skewed, it is not trial software, it's free software. The thing is that it's not the description I'm worried about--it's the download links. The website is a scam website. They apparently link to "cracks" and "keygens", but not only is that in itself illegal, they actually link to fake download websites that give you viruses and charge your credit card. Just to list things that are wrong with this website: they claim all software is paid software then offer downloads for keygens and cracks; they fake all details about the program and any program reviews and ratings; they and the downloads site they link to are probably run by the same person, so they make money off of these lies. I'm only a teenager with no means to pursue legal action. This means that, unfortunately, I can't do anything that will actually get results. I'd like my software to only be downloaded off my personal website. I have links to four legitimate locations to download my software and that's it. Essentially, is there anything I can do about this? As I said above, I can't pursue legal action, but is there some way I can discourage traffic to that website by blacklisting it or something? Can I make a claim on MY website to only download my software from the links I provide? Or should I just pay no mind? Because, honestly, it's a bit of a ways back in Google results. Thank you ahead of time.

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, and ones that retrieve values from session, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the CheckBoxList control can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out of the box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more! Read More >
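    As a rough, hypothetical sketch of the SQL side of that approach, one common pattern is a small split function that turns the comma-delimited string built from the CheckBoxList selections into a rowset that can be used with IN; the Books/GenreId/@GenreIds names below are illustrative only:
      -- Hypothetical helper: turns N'1,3,7' into a single-column table of ints.
      CREATE FUNCTION dbo.SplitIntegerList (@list nvarchar(max))
      RETURNS @ids TABLE (Id int)
      AS
      BEGIN
          DECLARE @pos int = CHARINDEX(',', @list);
          WHILE @pos > 0
          BEGIN
              INSERT @ids VALUES (CAST(LEFT(@list, @pos - 1) AS int));
              SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
              SET @pos = CHARINDEX(',', @list);
          END;
          IF LEN(@list) > 0
              INSERT @ids VALUES (CAST(@list AS int));
          RETURN;
      END
      GO
      -- Illustrative use; @GenreIds would be supplied as a parameter (for example via a custom parameter control).
      SELECT * FROM Books
      WHERE GenreId IN (SELECT Id FROM dbo.SplitIntegerList(@GenreIds));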

    Read the article

  • Can't Install Windows 2008 R2 on Lenovo Laptop With SSD

    - by Ben
    I am trying to install Windows Server 2008 R2 on a new Lenovo X201 or T410 laptop. Setup halts with the following pop-up:
      "A required CD/DVD device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now. Note: If the Windows installation media is in the CD/DVD drive, you can safely remove it for this step."
    The CD drive is obviously working, as it's booting from CD to get to this point. The only thing I can think of is that it has something to do with the fact that these machines have SSDs in them - but that's just a guess. (Edit - one extra thing that may or may not be relevant: it's the 64-bit version of Server 2008 R2.)

    Read the article

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity between users and the ACTIVE node the client was first bound to. When we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because - given the update-heavy nature of the user activities - we could cause them to lose changes. That said, there is one URL/endpoint - call it http://site/product/list - where we know that, when the client hits it, we could shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have made a few tries at writing an F5 rule along these lines but haven't had much success, so I thought I might ask here, assuming it's possible - I have no reason to think it's not, based on what we have found so far.

    Read the article

  • Do I need a dedicated server for load balancing?

    - by Ben
    I'm completely new to the concept of load balancing, so I hope this question isn't a "stupid question"; I've been searching around and I'm having a hard time understanding this. To my understanding, in order to load balance, I need a separate machine with an IP address that I can direct all traffic to. I initially thought I needed to rent 3 dedicated servers: one for load balancing and the other two as backend servers. Would a dedicated server be too much for a load balancer, or do hosting companies have special types of machines for that purpose? Then I read somewhere else that I can install load balancing software on both of the two servers and configure it in a way that doesn't require me to rent another machine/dedicated server for load balancing. So I'm a bit confused about how to actually implement a load balancer and whether or not I need a dedicated server for the sole purpose of acting as a load balancing machine. Also, I was recommended HAProxy, so I'll be heading in that direction for load balancing.

    Read the article

  • Kinect Presentation at Chippewa Valley Code Camp

    - by mbcrump
    On November 12th 2011, I gave a presentation at Chippewa Valley Code Camp titled, “Kinecting the Dots with the Kinect SDK”. As promised, here is the Slides / Code / Resources to my talk. (click image to download slides) The Kinect for Windows SDK beta is a starter kit for applications developers that includes APIs, sample code, and drivers. This SDK enables the academic research and enthusiast communities to create rich experiences by using Microsoft Xbox 360 Kinect sensor technology on computers running Windows 7. Resources : Download Kinect for Windows SDK beta 2 – You can either download a 32 or 64 bit SDK depending on your OS. FAQ for Kinect for Windows SDK Beta 2 Kinect for Windows SDK Quickstarts for Windows SDK Beta 2 Information on upgrading Kinect Applications to MS SDK Beta 2. – Brand new post by me on how to upgrade Kinect applications to Beta 2. Getting the Most out of the Kinect SDK by me for the Microsoft MVP Award Program Blog. My “Busy Developers Guide to the Kinect SDK” (still references Beta 1 – but most information is still valid) Helpful toolkits / templates mentioned in the talk. Coding4Fun Kinect Toolkit – Lots of extension methods and controls for WPF and WinForms. KinectContrib – Visual Studio 2010 Templates (not updated for Beta 2 as of 11/14/2011). Fun Projects for learning purposes (all updated to Beta 2): Kinect Mouse Cursor – Use your hands to control things like a mouse created by Brian Peek. Kinect Paint – Basically MS Paint but use your hands! Kinecting the Dots: Adding Buttons to your Kinect Application (not on Beta 2 – but check out the guide by me on how to do this) Thanks for attending! I had a really great time at the event and would like to personally thank everyone for coming out to support the local community.  Thanks for reading. Subscribe to my feed

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page. Be it data entry ‘windows’ or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here’s a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately by using a CSS class to define the picture box layout. /* * * Closable * * Makes selected DOM elements closable by making them * invisible when close icon is clicked * * Version 1.01 * @requires jQuery v1.3 or later * * Copyright (c) 2007-2010 Rick Strahl * http://www.west-wind.com/ * * Licensed under the MIT license: * http://www.opensource.org/licenses/mit-license.php Support CSS: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Options: * handle Element to place closebox into (like say a header). Use if main element and closebox container are two different elements. * closeHandler Function called when the close box is clicked. Return true to close the box return false to keep it visible. * cssClass The CSS class to apply to the close box DIV or IMG tag. * imageUrl Allows you to specify an explicit IMG url that displays the close icon. If used bypasses CSS image styling. * fadeOut Optional provide fadeOut speed. Default no fade out occurs */ (function ($) { $.fn.closable = function (options) { var opt = { handle: null, closeHandler: null, cssClass: "closebox", imageUrl: null, fadeOut: null }; $.extend(opt, options); return this.each(function (i) { var el = $(this); var pos = el.css("position"); if (!pos || pos == "static") el.css("position", "relative"); var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el; var div = opt.imageUrl ? $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") : $("<div>"); div.addClass(opt.cssClass) .click(function (e) { if (opt.closeHandler) if (!opt.closeHandler.call(this, e)) return; if (opt.fadeOut) $(el).fadeOut(opt.fadeOut); else $(el).hide(); }); if (opt.imageUrl) div.css("background-image", "none"); h.append(div); }); } })(jQuery); The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is provided typically by way of a CssClass - .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Alternately you can also supply an image URL which overrides the background image in the style sheet. 
I use this plug-in mostly on pop up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this one. You can find this sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it’s nice to have something reusable, because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc., and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that’s closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

$().ready(function() {
    $('#ctl00_MainContent_panEditBook')
        .closable({ handle: $('#divEditBook_Header') })
        .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
        .shadow({ opacity: 0.25, offset: 6 });
})

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window is just closable to go away, so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit more ugly in the sample, as it uses an image from a resources file:

.closable({ imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
            handle: $('#divEditBook_Header')})

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached to the list when the client template is rendered. Here I’m using client templating, and the code that does this looks like this:

function loadBooks() {
    showProgress();

    // Clear the content
    $("#divBookListWrapper").empty();

    var filter = $("#" + scriptVars.lstFiltersId).val();

    Proxy.GetBooks(filter, function(books) {
        $(books).each(function(i) {
            updateBook(this);
            showProgress(true);
        });
    }, onPageError);
}
function updateBook(book, highlight) {
    // try to retrieve the single item in the list by tag attribute id
    var item = $(".bookitem[tag=" + book.Pk + "]");

    // grab and evaluate the template
    var html = parseTemplate(template, book);

    var newItem = $(html)
        .attr("tag", book.Pk.toString())
        .click(function() {
            var pk = $(this).attr("tag");
            editBook(this, parseInt(pk));
        })
        .closable({
            closeHandler: function(e) {
                removeBook(this, e);
            },
            imageUrl: "../../images/remove.gif"
        });

    if (item.length > 0)
        item.after(newItem).remove();
    else
        newItem.appendTo($("#divBookListWrapper"));

    if (highlight) {
        newItem
            .addClass("pulse")
            .effect("bounce", { distance: 15, times: 3 }, 400);
        setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
    }
}

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.
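As a small aside, the closeHandler also gives you a natural place to veto the close before anything happens. Here is a minimal sketch; the confirmation prompt is illustrative and not part of the original sample, and the .bookitem selector and remove.gif image are reused from the list example above:

// ask for confirmation before an item is removed from the list display
$(".bookitem").closable({
    imageUrl: "../../images/remove.gif",
    closeHandler: function (e) {
        // returning false keeps the element visible
        if (!window.confirm("Remove this item?"))
            return false;

        // returning true lets the plug-in hide (or fade out) the element
        return true;
    }
});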
Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image.

This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. Optionally, the handler code can also return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable class behavior and options here: .closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource")
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(),
                                                 ControlResources.CLOSE_ICON_RESOURCE);

    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",handler: " + this.ClientDialogHandler);

    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");

    startupScript.Append(@"
    .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation which was ugly, to say the least.

I know a lot of folks frown on using server controls, especially when the UI is client centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached, more easily and with the help of IntelliSense. Often the script markup is easier, especially if you are dealing with complex, multiple plug-in associations that are expressed more easily with property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

Related Links
- Download jquery.closable
- West Wind Web Toolkit
- jQuery Plug-ins

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET  JavaScript

    Read the article

  • Have you changed your coding style recently? It wasn't hard, was it?

    - by Ernelli
I used to write code in C-like languages using the Allman style, regarding the position of braces:

void foo(int bar)
{
    if(bar)
    {
        //...
    }
    else
        return;
    //...
}

For the last two years I have been working mostly in JavaScript, and when we adopted jslint as part of our QA process, I had to adapt to the Crockford way of doing things. So I had to change the coding style into:

function foo(bar) {
    if (bar) {
        //...
    } else {
        return;
    }
    //...
}

Now, apart from comparing a C/C++ example with JavaScript, I must say that my JavaScript/Crockford coding style has now spread into my C/C++/Java coding when I revise old projects and work on code in those languages, which, for example, have no problem with single-line statements or ambiguous newline insertion. I used to consider the latter format very awkward. I have never had any problems with adapting my coding style to the one chosen by my predecessors, except for when I was a junior developer, mostly being the sole developer on legacy projects, and the first thing I did was to change the indenting style. But now, after a couple of months, I consider the Allman style a little bit too spacious and feel more comfortable with the K&R-like style. Have you changed your coding style during your career?
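To make the "ambiguous newline insertion" point concrete, here is a small sketch (not from the original question) of why JSLint insists on the K&R-style brace placement in JavaScript: automatic semicolon insertion terminates a bare return statement at the end of the line, so an Allman-style object literal is never returned.

// Allman-style: ASI inserts a semicolon after "return",
// so the object literal below is dead code and the function returns undefined
function getAnswerAllman()
{
    return
    {
        answer: 42
    };
}

// K&R/Crockford-style: the brace on the same line keeps the object literal
// attached to the return statement
function getAnswerKandR() {
    return {
        answer: 42
    };
}

console.log(getAnswerAllman()); // undefined
console.log(getAnswerKandR());  // { answer: 42 }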

    Read the article

  • Windows 7 MSDN version

    - by Larry Branham
I recently purchased a copy of Windows 7 online. Upon trying to install it, setup stopped and told me it could not find a 64 bit driver for my CD/DVD drive, so I could not finish the install. I contacted Microsoft and was told that, according to the product key, this was something called an MSDN version, and that was the reason for the problem. They also said it was illegal to sell it to me. The company I bought it from never returned my calls or emails, and luckily I got my money back through Amazon. Amazon didn't even want me to send it back. Anyway, my question is: is there any way to load this onto my system, and will I have problems, or should I just throw it out? Any help would be great. Thanks

    Read the article

< Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >