Search Results

Search found 1440 results on 58 pages for 'adam machanic'.

Page 54/58 | < Previous Page | 50 51 52 53 54 55 56 57 58  | Next Page >

  • Rename "Event" object in jQuery FullCalendar plug-in

    - by Jeff
    GREAT PLUGIN!!! BUT... the choice of the word "Event" to mean a "calendar entry" was particularly unfortunate. This is a wonderfully well-written plug-in, and I've really impressed people here at work with what this thing can do. The documentation is astonishingly thorough and clear. Congratulations to Adam! HOWEVER, this plug-in refers to entries in the calendar as "Events" -- this has caused a lot of confusion in my development team's conversations, because when we use the word "Event" we think of things like onmouseover, click, etc. We would really prefer a term like CalendarEvent or CalendarEntry. I am not all that experienced with jQuery yet, so I am wondering if there is a simple way to alias one of those terms to this plug-in's Event/Events object? (I know we could recode the plug-in directly, but our code would then break when we download an update.) Thanks!
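
    A minimal sketch of the aliasing idea, assuming the standard jQuery FullCalendar plug-in is loaded; the wrapper name teamCalendar and the option name calendarEntries below are made up for illustration, and the plug-in source itself stays untouched so updates won't break anything:

        (function ($) {
            // Thin wrapper: accept "calendarEntries" and hand it to the plug-in as "events".
            $.fn.teamCalendar = function (options) {
                options = options || {};
                if (options.calendarEntries) {
                    options.events = options.calendarEntries;
                    delete options.calendarEntries;
                }
                return this.fullCalendar(options);
            };
        }(jQuery));

        // usage
        $('#calendar').teamCalendar({
            calendarEntries: [ { title: 'Sprint review', start: '2012-06-01' } ]
        });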

    Read the article

  • How to get element order number

    - by martin-masiar
    Hello everyone, how can I get the order number of an element with JavaScript/jQuery? <ul> <li>Anton</li> <li class="abc">Victor</li> <li class="abc">Simon</li> <li>Adam</li> <li>Peter</li> <li class="abc">Tom</li> </ul> There are three li elements with the abc class. Now I need to get the order (sequence) number of the Simon li. Thanks in advance
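
    A minimal jQuery sketch against the markup above: .index() with no argument gives the zero-based position among all sibling li elements, while .index(element) gives the position within a filtered set.

        var simon = $('li:contains("Simon")');

        var orderInList = simon.index();            // 2 : position among all <li> siblings
        var orderInAbc  = $('li.abc').index(simon); // 1 : position among the .abc items only

        // add 1 to either value if a 1-based order number is wanted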

    Read the article

  • eclipse plugin not loading dll due to long path

    - by user113018
    I am building an Eclipse plugin (a Notes plugin, but it's an Eclipse plugin in the end). One of the plugins my plugin depends on needs to load a native DLL. The problem is that the load fails depending on where on disk the DLL is; if the path is longer than a certain threshold I get the error below. java.lang.UnsatisfiedLinkError: nlsxbe (The filename or extension is too long. ) at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:952) at java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:921) at java.lang.System.loadLibrary(System.java:452) at lotus.domino.NotesThread.load(Unknown Source) at lotus.domino.NotesThread.checkLoaded(Unknown Source) at lotus.domino.NotesThread.sinitThread(Unknown Source) at com.atempo.adam.lotus.plugin.views.TopicView.createPartControl(TopicView.java:609) I have added the path to the Path environment variable, and also registered the DLL, to no avail. My environment is MS Vista Professional, Java 1.5, Eclipse 3.4 (and Lotus 8). Anyone out there have a clue? Many thanks in advance.

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • D3.js binding an object to data and appending for each key

    - by frshca
    I'm a D3.js newbie and I'm learning how to play around with data. Let's say I have an object with names as keys, and each key has an array of numbers like this: var userdata = { 'John' : [0, 1, 3, 9, 8, 7], 'Harry': [0, 10, 7, 1, 1, 11], 'Steve': [3, 1, 4, 4, 4, 17], 'Adam' : [4, 77, 2, 13, 11, 13] }; For each user, I would like to append an SVG object and then plot the line with the array of values for that user. So here is my assumption of how that would look based on tutorials, but I know it is incorrect. This is to show my limited knowledge and give better understanding of what I'm doing: First I should create the line var line = d3.svg.line().interpolate('basis'); Then I want to bind the data to my body and append an svg element for each key: d3.select('body') .selectAll('svg') .data(userdata) .enter() .append('svg') .append(line) .x(function(d, i) { return i; }) .y(function(d) { return d[i]; }); So am I close??
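
    A sketch of one way this is commonly wired up with the D3 v2/v3 API; the width, height and scale domains below are assumed values, not anything taken from the question:

        var width = 150, height = 60;

        var x = d3.scale.linear().domain([0, 5]).range([0, width]);
        var y = d3.scale.linear().domain([0, 80]).range([height, 0]);

        var line = d3.svg.line()
            .interpolate('basis')
            .x(function (d, i) { return x(i); })   // i = position within the user's array
            .y(function (d)    { return y(d); });  // d = the value itself

        d3.select('body').selectAll('svg')
            .data(d3.entries(userdata))            // [{key: 'John', value: [0, 1, 3, 9, 8, 7]}, ...]
          .enter().append('svg')
            .attr('width', width)
            .attr('height', height)
          .append('path')
            .attr('d', function (d) { return line(d.value); });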

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real life project a couple of things have dawned on me: As soon as your SSIS Catalog gets a significant amount of data in it the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within a SSRS report. SSIS professionals are data guys at heart and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway) Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it but if you would rather just download the bits yourself and start to play you can download v1.0.0.0 from DB v1.0.0.0. Usage Scenarios Most Recent Execution I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get. EXEC [dbo].[sp_ssiscatalog] This will return up to 5 resultsets: EXECUTION - Summary information about the execution including status, start time & end time EVENTS - All events that occurred during the execution OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed OnWarning - All events where event_name is OnWarning EXECUTABLE_STATS - Duration and execution result of every executable in the execution All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution @exec_events @exec_errors @exec_warnings @exec_executable_stats Any Execution As just explained the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should return data for simply supply the execution_id as a parameter: EXEC [dbo].[sp_ssiscatalog] 6 All Executions sp_ssiscatalog can also return information about all executions: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs' The most recent execution will appear at the top. 
sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name @execs_project_name @execs_package_name @execs_executed_as_name @execs_status_desc Some typical usages might be: //Return all failed executions EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed' //Return all executions for a specified folder EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder' //Return all executions of a specified package in a specified project EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx' Installing sp_ssiscatalog Under the covers sp_ssiscatalog actually calls many other stored procedures and functions hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in an SQL Server Data Tools (SSDT) database project which means that you have two ways of obtaining it. Download the source code You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer. Download a dacpac Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command-line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command-prompt and cd to the folder containing the downloaded dacpac: Type: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local) or the shortened form: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) remembering to set your server name appropriately (here mine is set to “(local)” ). If everything works successfully you will see this: And you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog:   Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet
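
    As a quick recap of the toggles described above, a sketch of a call that asks for only the error and executable-statistics resultsets of the most recent execution (parameter names exactly as listed in the post; behaviour not verified beyond that):

        EXEC [dbo].[sp_ssiscatalog]
            @exec_execution        = 0,
            @exec_events           = 0,
            @exec_errors           = 1,
            @exec_warnings         = 0,
            @exec_executable_stats = 1;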

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP set ups handily. However all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating. My testing box: Core i7-2600 Intel CPU (has 8 threads with 4 cores) 8GB DDR3 RAM Fedora 16 64bit Node.js v0.6.13 Nginx v1.0.13 PHP v5.3.10 (with PHP-FPM) My test scripts: Node.js script var cluster = require('cluster'); var http = require('http'); var numCPUs = require('os').cpus().length; if (cluster.isMaster) { // Fork workers. for (var i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('death', function (worker) { console.log('worker ' + worker.pid + ' died'); }); } else { // Worker processes have an HTTP server. http.Server(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'); for (var i = 0; i < 10000; i++) { res.write('<p>Hello world</p>\n'); } res.end('</body>\n</html>'); }).listen(80); } This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html PHP script <?php echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n"; for ($i = 0; $i < 10000; $i++) { echo "<p>Hello world</p>\n"; } echo "</body>\n</html>"; My results Node.js $ ab -n 500 -c 20 http://speedtest.dev/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: Server Hostname: speedtest.dev Server Port: 80 Document Path: / Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 14.603 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95066500 bytes HTML transferred: 95035000 bytes Requests per second: 34.24 [#/sec] (mean) Time per request: 584.123 [ms] (mean) Time per request: 29.206 [ms] (mean, across all concurrent requests) Transfer rate: 6357.45 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 94 547 405.4 424 2516 Waiting: 0 331 399.3 216 2284 Total: 95 547 405.4 424 2516 Percentage of the requests served within a certain time (ms) 50% 424 66% 607 75% 733 80% 813 90% 1084 95% 1325 98% 1843 99% 2062 100% 2516 (longest request) PHP/Nginx $ ab -n 500 -c 20 http://speedtest.dev/test.php This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: nginx/1.0.13 Server Hostname: 
speedtest.dev Server Port: 80 Document Path: /test.php Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 0.130 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95109000 bytes HTML transferred: 95035000 bytes Requests per second: 3849.11 [#/sec] (mean) Time per request: 5.196 [ms] (mean) Time per request: 0.260 [ms] (mean, across all concurrent requests) Transfer rate: 715010.65 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 1 Processing: 3 5 0.7 5 7 Waiting: 1 4 0.7 4 7 Total: 3 5 0.7 5 7 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 6 90% 6 95% 6 98% 6 99% 6 100% 7 (longest request) Additional details Again what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well, however with these test results (which I really hope I made a mistake with - as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.
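
    One variable worth isolating before drawing conclusions, offered as a sketch rather than a verdict: the Node handler above issues 10,000 small res.write() calls, each carrying per-chunk overhead, while PHP's output is typically buffered before it hits the wire. Building the body once and sending it in a single write is closer to an apples-to-apples comparison (this replaces only the worker branch of the cluster script):

        http.Server(function (req, res) {
            var parts = ['<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'];
            for (var i = 0; i < 10000; i++) {
                parts.push('<p>Hello world</p>\n');
            }
            parts.push('</body>\n</html>');

            var body = parts.join('');
            res.writeHead(200, {
                'Content-Type': 'text/html',
                'Content-Length': Buffer.byteLength(body)  // avoids chunked transfer encoding
            });
            res.end(body);
        }).listen(80);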

    Read the article

  • VBScript + Regular Expressions

    - by Karthik
    Dim sString sString = "John;Mary;Anne;Adam;Bill;Ester" Is there a regex I can use to retrieve the following from the above list: John (; at the end of the name) Anne (; at the beginning and end) Ester (; at the beginning) I am currently using the following regex for each: 1. Joh.* 2. .*An.* 3. .*st.* But, the above retrieves the entire string instead of the values I want. How can I get the correct values? Code: Dim oRegex : Set oRegex = New RegExp oRegex.Global = False oRegex.IgnoreCase = False 'John oRegex.Pattern = "Joh.*" Set oMatch = oRegex.Execute(sString) sName = oMatch(0) The above code retrieves the entire string, instead of only John. Same issue with the others :(
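
    A sketch of the missing piece: a capture group. The Match object is the whole match, while its SubMatches collection holds just the grouped text; the patterns below use [^;] instead of .* so they stop at the semicolons.

        Dim oRegex : Set oRegex = New RegExp
        oRegex.Global = False
        oRegex.IgnoreCase = False

        ' John: the name before the first ";"
        oRegex.Pattern = "^([^;]*);"
        Set oMatch = oRegex.Execute(sString)
        sName = oMatch(0).SubMatches(0)      ' "John"

        ' Anne: a name with ";" on both sides
        oRegex.Pattern = ";(An[^;]*);"
        Set oMatch = oRegex.Execute(sString)
        sName = oMatch(0).SubMatches(0)      ' "Anne"

        ' Ester: the name after the last ";"
        oRegex.Pattern = ";([^;]*)$"
        Set oMatch = oRegex.Execute(sString)
        sName = oMatch(0).SubMatches(0)      ' "Ester"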

    Read the article

  • SetAccessControl throws "Attempted to perform unauthorized operation"

    - by Darqer
    I have Windows Server 2008 (I think the same problem can occur on Windows Vista); it is joined to the domain. I run the following code: DirectorySecurity ds = Directory.GetAccessControl( "c:\\windows\\ADAM" ); ds.AddAccessRule( new FileSystemAccessRule( "domainName\\user", FileSystemRights.Read | FileSystemRights.ListDirectory, AccessControlType.Allow ) ); Directory.SetAccessControl( configurationDirectory, ds ); I'm logged in as a domain administrator and I get the following error: Attempted to perform unauthorized operation. The goal is that a particular user should have access to the *.exe files available in this directory. What can I do to achieve this?
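
    A sketch of one variation that is commonly suggested for this exception, not a guaranteed fix: request only the DACL when reading, so SetAccessControl does not also try to write the owner/group sections (which needs extra privileges), and make sure the process is actually elevated under UAC:

        using System.IO;
        using System.Security.AccessControl;

        string dir = @"c:\windows\ADAM";

        // Ask for just the access rules (DACL); the owner, group and audit
        // sections are then left alone when the ACL is written back.
        DirectorySecurity ds = Directory.GetAccessControl(dir, AccessControlSections.Access);
        ds.AddAccessRule(new FileSystemAccessRule(
            @"domainName\user",
            FileSystemRights.Read | FileSystemRights.ListDirectory,
            AccessControlType.Allow));
        Directory.SetAccessControl(dir, ds);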

    Read the article

  • Help me find the dependency list.

    - by Pearl
    I have two tables: an employee table and an employee dependency table. The employee table looks like below. insert into E values(1,'Adam') insert into E values(2,'Bob') insert into E values(3,'Candy') insert into E values(4,'Doug') insert into E values(5,'Earl') insert into E values(6,'Fran') The employee dependency table looks like below. insert into Ed values(3,'2') insert into Ed values(3,'5') insert into Ed values(2,'1') insert into Ed values(2,'4') insert into Ed values(5,'6') I need to find the dependency list like below: Eid Ename Dname 3 Candy Bob,Fran Please help me find the above.
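
    A sketch, assuming SQL Server and assuming the unnamed columns are E(Eid, Ename) and Ed(Eid, DepId) since no DDL is shown; it lists each employee's direct dependencies as a comma-separated string via FOR XML PATH (so for Candy it would produce her direct rows, Bob and Earl):

        SELECT  e.Eid,
                e.Ename,
                STUFF((SELECT ',' + d.Ename
                       FROM   Ed
                       JOIN   E AS d ON d.Eid = Ed.DepId
                       WHERE  Ed.Eid = e.Eid
                       FOR XML PATH('')), 1, 1, '') AS Dname
        FROM    E AS e
        WHERE   EXISTS (SELECT 1 FROM Ed WHERE Ed.Eid = e.Eid)
        ORDER BY e.Eid;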

    Read the article

  • SDP media field format

    - by TacB0sS
    Hey, I would like to create an SDP media field with its attributes, and there are a few things I don't understand. I've skimmed and read the relevant RFC and I understand most of what each field means, but what I don't understand is how to derive, from the JMF Audio/Video Format, which parameters of the format compose the rtpmap registry entries I need to use. I often see fields like m=audio 12548 RTP/AVP 0 8 101 a=rtpmap:0 PCMU/8000 a=rtpmap:8 PCMA/8000 a=rtpmap:101 telephone-event/8000 a=fmtp:101 0-16 a=silenceSupp:off - - - - a=ptime:20 a=sendrecv These are received from the PBX server I'm connecting to; what do they mean in terms of the JMF audio format properties? (I do understand these are standard audio formats commonly used in telecommunication.) UPDATE: I was more wondering about the format parameters '0 8 101' at the end of m=audio 12548 RTP/AVP 0 8 101. Thanks in advance, Adam Zehavi.
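
    For what it's worth, a sketch of how that format list is usually read: 0 and 8 are the static RTP payload types PCMU (u-law) and PCMA (A-law) at 8000 Hz, and 101 is a dynamic payload type that the rtpmap line binds to telephone-event (DTMF signalling, not an audio codec). In JMF terms the two audio payloads correspond roughly to these AudioFormats:

        import javax.media.format.AudioFormat;

        public class PayloadFormats {
            // payload type 0 -> PCMU/8000 (u-law over RTP)
            static final AudioFormat PCMU = new AudioFormat(AudioFormat.ULAW_RTP, 8000, 8, 1);

            // payload type 8 -> PCMA/8000 (A-law)
            static final AudioFormat PCMA = new AudioFormat(AudioFormat.ALAW, 8000, 8, 1);
        }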

    Read the article

  • T-SQL Tuesday #34: Help! I Need Somebody!

    - by Most Valuable Yak (Rob Volk)
    Welcome everyone to T-SQL Tuesday Episode 34!  When last we tuned in, Mike Fal (b|t) hosted Trick Shots.  These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better. This month, I'm asking you to look back this past week, year, century, or hour...to a time when you COULDN'T figure it out.  When you were stuck on a SQL Server problem and you had to seek help. In the beginning... SQL Server has changed a lot since I started with it.  <Cranky Old Guy> Back in my day, Books Online was neither.  There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions.  (Besides the Microsoft newsgroups...which you had to access with Usenet.  And endure the wrath of...Celko.)  Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support...on the phone...a LANDLINE phone.  And none of this "SQL Family" business!</Cranky Old Guy> Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server.  The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.*  So whether you are a beginner, or a seasoned pro of over a decade's experience, what do you do when you need help on SQL Server? That's so meta... In the spirit of offering help, here are some suggestions for your topic: Tell us about a person or SQL Server community who have been helpful to you.  It can be about a technical problem, or not, e.g. someone who volunteered for your local SQL Saturday.  Sing their praises!  Let the world know who they are! Do you have any tricks for using Books Online?  Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why? If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch? Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute? Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument. These are just suggestions, you are free to write whatever you like.  Bonus points if you can incorporate ALL of these into a single post.  (Or you can do multiple posts, we're flexible like that.)  Help us help others by showing how others helped you! Legalese, Your Rights, Yada yada... If you would like to participate in T-SQL Tuesday please be sure to follow the rules below: Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT. Include the T-SQL Tuesday logo (above) and hyperlink it back to this post. If you don’t see your post in trackbacks, add the link to the comments below. If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag.  I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback.  
I'll have a follow-up post listing all the contributions as soon as I can. Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

    Read the article

  • cc.net dynamic parameters in publisher block

    - by aseabridge
    I am using CC.Net to run an .exe file after the project build is complete, and need to pass the project name, publish date/time and user on the command line as parameters to the .exe. However, I can't get CC.Net to recognise these as dynamic properties and replace them with the correct values. Any ideas? <publishers><exec executable="C:\MyApp.exe"></exec><buildArgs>"$[$CCNetProject]" "$[$CCNetBuildDate]" "$[$CCNetBuildTime]" "$[$CCNetUser]"</buildArgs><buildTimeoutSeconds>30</buildTimeoutSeconds></publishers> Thanks in advance for the help. Adam

    Read the article

  • How to write "good" user interface texts?

    - by Roddy
    Many applications are let down by the quality of the 'writing' in their user interfaces: typically, poor spelling, grammar, inconsistent tone, and worse yet, "humour" are the usual offenders. Are there good resources that can help developers to write UI messages that give a professional and positive impression to your customers, even when your code's going to hell in a handcart? Thanks, all — Some great resources here, so I will CW this question. I'm accepting Adam Sill's answer because it's the one that (as a developer of desktop apps) I found most pertinent.

    Read the article

  • Visual Studio Team System 2008 Code Coverage Window Closes

    - by ThoughtCrhyme
    Trying to run the code coverage tool in Visual Studio for a set of unit tests. Adam from Think First, Code Later has had the same problem: I wanted to get the code coverage metrics for the project. Naturally, I fire up the solution in Visual Studio 2008, go to the Test menu, click Edit Test Run Configurations, and click Local Test Run. I then click Code Coverage to turn on code coverage for a given assembly and POOF the Local Test Run Configuration window just disappears. He recommends installing this hotfix to fix the problem, however: a) when I run that hotfix I get the message “None of the products that are addressed by this software update are installed on this computer. Click Cancel to exit setup.” and b) there is no Silverlight in our solution. Any other ideas for a fix?

    Read the article

  • WCF - (Custom) binary serialisation.

    - by Barguast
    I want to be able to query my database over the web, and I want to use a WCF service to handle the requests and results. The problem is that due to the amount of data that can potentially be returned from these queries, I'm worried about how these results will be serialised over the network. For example, I can imagine the XML serialisation looking like: <Results> <Person Name="Adam" DateOfBirth="01/02/1985" /> <Person Name="Bob" DateOfBirth="04/07/1986" /> </Results> And the binary serialisation containing type names and other (unnecessary) metadata. Perhaps even the type name for each element in a collection? o_o Ideally, I'd like to perform the serialisation of certain DataContracts myself so I can make it super-compact. Does anyone know if this is possible, or of any articles which explain how to do custom serialisation with WCF? Thanks in advance
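
    One option short of hand-rolling a serialiser, sketched here with made-up service names: keep the DataContracts but put binary message encoding on the endpoint, which writes the same XML infoset in a compact binary form and typically shrinks large, repetitive result sets considerably:

        <system.serviceModel>
          <bindings>
            <customBinding>
              <binding name="compactBinding">
                <binaryMessageEncoding />
                <httpTransport />
              </binding>
            </customBinding>
          </bindings>
          <services>
            <service name="MyApp.QueryService">
              <endpoint address=""
                        binding="customBinding"
                        bindingConfiguration="compactBinding"
                        contract="MyApp.IQueryService" />
            </service>
          </services>
        </system.serviceModel>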

    Read the article

  • Exporting Eclipse project with a reference to native library

    - by TacB0sS
    I have an Eclipse project that uses JMF. I found out I could skip the JMF installation process and still use the CaptureDeviceManager of JMF, and receive the list of devices, if I could point my project to the native lib of JMF. I've managed to add the native lib to the IDE run/debug configuration, but once I export the application to an external runnable Jar, the application cannot find the native lib. The files are located in c:\JMF*.dll. I tried to add the folder path to the environment variable in Windows - didn't work. I tried to add them into another Jar and add it to the project - didn't work. I tried to add the files into the project - didn't work. I tried to add the path to the class path - didn't work. I tried to add the path to the library path - didn't work. Does someone have any sort of a solution? Thanks in advance, Adam Zehavi.
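
    A sketch of the usual workaround: System.loadLibrary consults java.library.path (not the classpath, and not files packed inside the jar), so the exported runnable jar can be launched with that property pointing at the DLL folder; the jar name below is just an example:

        rem launch.bat - run the exported runnable jar with the JMF native libraries visible
        java -Djava.library.path=C:\JMF -jar MyApplication.jar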

    Read the article

  • Lua: Why does changing the value of one variable change the value of another one too?

    - by user474563
    I think that by running this code you will see exactly what I mean. I want to register 5 names to a register (people). I loop 5 times and in each loop I have a variable newPerson which is supposed to hold all information about a person and then be added to the people register. In this example only the names of the people are being registered, for simplicity. The problem is that in the end all people end up with the same name: "Petra". I played around with this but can't find a reasonable explanation for this behaviour. Help appreciated! local people={} local person={ name="Johan", lastName="Seferidis", class="B" } local names={"Markus", "Eva", "Nikol", "Adam", "Petra"} --people to register for i=1, 5 do --register 5 people local newPerson=person local name=names[i] for field=1, 3 do --for each field(name, lastname, class) if field==1 then newPerson["name"]=name end --register name end people[i]=newPerson end print("First person name: " ..people[1]["name"]) print("Second person name: "..people[2]["name"]) print("Third person name: " ..people[3]["name"])
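
    A sketch of what is going on: in Lua, tables are assigned by reference, so local newPerson = person makes every iteration modify the one shared person table, and the last name written ("Petra") is what all five entries end up pointing at. Creating a fresh table (or a shallow copy) per iteration fixes it:

        for i = 1, 5 do
          local newPerson = {}              -- a brand-new table each iteration
          for k, v in pairs(person) do      -- shallow-copy the template fields
            newPerson[k] = v
          end
          newPerson.name = names[i]
          people[i] = newPerson
        end

        print(people[1].name)               --> Markus
        print(people[2].name)               --> Eva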

    Read the article

  • Remove Duplicates from JavaScript Array

    - by kramden88
    This seems like such a simple need but I've spent an inordinate amount of time trying to do this to no avail. I've looked at other questions on SO and I haven't found what I need. I have a very simple JavaScript array such as peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"); that may or may not contain duplicates and I need to simply remove the duplicates and put the unique values in a new array. That's it. I could post all the code that I've tried, but I think it's useless because it doesn't work. If anyone has done this and can help me out I'd really appreciate it. JavaScript or jQuery solutions are both acceptable.
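
    A minimal sketch in plain JavaScript of the usual approach: remember what has been seen in an object keyed by value and keep only first occurrences (newer engines can do the same in one line with a Set):

        function unique(list) {
            var seen = {};
            var result = [];
            for (var i = 0; i < list.length; i++) {
                if (!seen[list[i]]) {
                    seen[list[i]] = true;
                    result.push(list[i]);
                }
            }
            return result;
        }

        var uniqueNames = unique(peoplenames);
        // ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]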

    Read the article

  • php / mysql - select ids from one table, excluding ids which are in a second table

    - by John
    Hello. For example, I have 2 tables: 1. users: id Name 1 Mike 2 Adam 3 Tom 4 John 5 Andy 6 Ray 2. visits: userID date 1 ... 3 ... 6 ... I want to make a page which can be visited once every 12 hours. When a user visits that page, his id is recorded in the visits table. How can I select all users (from the users table) except the users who visited the page within the last 12 hours (users in the visits table)?
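
    A sketch of the query side, assuming visits.date is a DATETIME column; it keeps every user who has no visit newer than 12 hours:

        SELECT u.id, u.Name
        FROM   users u
        WHERE  NOT EXISTS (
                 SELECT 1
                 FROM   visits v
                 WHERE  v.userID = u.id
                   AND  v.date >= NOW() - INTERVAL 12 HOUR
               );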

    Read the article

  • How do you filter an NSMutableArray that contains Core Data?

    - by James
    I have an array that is populated by Core Data as follows. NSMutableArray *mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Spot" :@"Name" :YES :managedObjectContext]; It looks like this in the console: (entity: Spot; id: 0x4b7e580 ; data: { CityToProvince = 0x4b7dbd0 ; Description = "Friend"; Email = "[email protected]"; Age = 21; Name = "Adam"; Phone = "+44175240"; }), How can I filter the array to remove anyone who is over a certain age, or use values in the array to make calculations? Please help, I have been stuck on this for ages. Code would be greatly appreciated.
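
    A sketch using NSPredicate, with the attribute name taken from the console dump above and an example age threshold:

        NSPredicate *youngEnough = [NSPredicate predicateWithFormat:@"Age <= %d", 30];

        // non-destructive: build a new, filtered array
        NSArray *filtered = [mutableFetchResults filteredArrayUsingPredicate:youngEnough];

        // or filter the mutable array in place
        [mutableFetchResults filterUsingPredicate:youngEnough];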

    Read the article

  • How do I get the current location of an iframe?

    - by studiothat
    I have built a basic data entry application allowing users to browse external content in an iframe and enter data quickly from the same page. One of the data variables is the URL. Ideally I would like to be able to load the iframe's current URL into a textbox with JavaScript. I realize now that this is not going to happen due to security issues. Has anyone done anything on the server side, or does anyone know of any .NET browser-in-browser controls? The ultimate goal is just to give the user an easy method of extracting the URL of the page they are viewing in the iframe. It doesn't necessarily HAVE to be an iframe; a browser in the browser would be ideal. Thanks, Adam
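
    For reference, a sketch of the only case where reading it client-side works, namely when the framed page is same-origin (for example because it is fetched through your own server-side proxy page); the element ids here are made up:

        var frame = document.getElementById('browserFrame');

        try {
            // Works only when the framed document shares your page's origin.
            document.getElementById('urlBox').value = frame.contentWindow.location.href;
        } catch (e) {
            // Cross-origin content is blocked, so fall back to the last URL
            // your own code assigned to the frame.
            document.getElementById('urlBox').value = frame.src;
        }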

    Read the article

  • Multi join query returns too many results and improperly matched

    - by Woot4Moo
    I have the following minimal schema in Oracle: http://sqlfiddle.com/#!4/c1ed0/14 The queries I have run yield too many results and this query: select cat.*, status.*, source.* from cats cat, status status, source source Left OUTER JOIN source source2 on source2.sourceid = 1 Right OUTER JOIN status status2 on status2.isStray =0 order by cat.name will yield incorrect results. What I am expecting is a table that looks like the following however I cannot seem to come up with the correct SQL. NAME AGE LENGTH STATUSID CATSOURCE ISSTRAY SOURCEID CATID Adam 1 25 null null null 1 2 Bill 5 1 null null null null null Charles 7 5 null null null null null Steve 12 15 1 1 1 1 1 In plain English what I am looking for is to return all known cats + their associated cat source + their cat status while retaining null values. The only information I will have is the source that I am curious about. I also only want the cats that have a status of either STRAY or UNKNOWN (null)

    Read the article

  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server - so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format - there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size of the data I want to download. My question is: Supposing the offset is 1024. I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
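
    A sketch of the relevant knob: FtpWebRequest.ContentOffset issues an FTP REST command, so the server starts sending at that byte and bytes 0-1023 are never transferred; the offset parameter of Stream.Read, by contrast, only indexes into your local buffer. Server, credentials and file names below are placeholders:

        using System.IO;
        using System.Net;

        var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/archive.bin");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential("user", "password");
        request.ContentOffset = 1024;   // offset read from the TOC

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var ftpStream = response.GetResponseStream())
        using (var local = File.Create("entry.dat"))
        {
            ftpStream.CopyTo(local);    // .NET 4+; on earlier versions loop with Read/Write
        }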

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58  | Next Page >