Search Results

Search found 328 results on 14 pages for 'fold'.

Page 13/14

  • Can I make Axis2 generate a WSDL with 'unwrapped' types?

    - by Bedwyr Humphreys
    I'm trying to consume a hello-world Axis2 SOAP web service using a PHP client. The Java class is written in NetBeans and the Axis2 .aar file is generated using the NetBeans Axis2 plugin. You've all seen it before, but here's the Java class:

        public class SOAPHello {
            public String sayHello(String username) {
                return "Hello, " + username;
            }
        }

    The WSDL generated by Axis2 seems to wrap all the parameters, so that when I consume the service I have to use a crazy PHP script like this:

        $client = new SoapClient("http://myhost:8080/axis2/services/SOAPHello?wsdl");
        $parameters["username"] = "Dave";
        $response = $client->sayHello($parameters)->return;
        echo $response."!";

    when all I really want to do is:

        echo $client->sayHello("Dave")."!";

    My question is two-fold: why is this happening, and what can I do to stop it? :) Here are the types, message and portType sections of the generated WSDL:

        <wsdl:types>
          <xs:schema attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://soap.axis2.myhost.co.uk">
            <xs:element name="sayHello">
              <xs:complexType>
                <xs:sequence>
                  <xs:element minOccurs="0" name="username" nillable="true" type="xs:string"/>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
            <xs:element name="sayHelloResponse">
              <xs:complexType>
                <xs:sequence>
                  <xs:element minOccurs="0" name="return" nillable="true" type="xs:string"/>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:schema>
        </wsdl:types>
        <wsdl:message name="sayHelloRequest">
          <wsdl:part name="parameters" element="ns:sayHello"/>
        </wsdl:message>
        <wsdl:message name="sayHelloResponse">
          <wsdl:part name="parameters" element="ns:sayHelloResponse"/>
        </wsdl:message>
        <wsdl:portType name="SOAPHelloPortType">
          <wsdl:operation name="sayHello">
            <wsdl:input message="ns:sayHelloRequest" wsaw:Action="urn:sayHello"/>
            <wsdl:output message="ns:sayHelloResponse" wsaw:Action="urn:sayHelloResponse"/>
          </wsdl:operation>
        </wsdl:portType>
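    The behaviour described is the document/literal 'wrapped' convention: the operation name doubles as the name of a wrapper element that contains the parameters, which is why the PHP SoapClient wants an associative array. As a hedged illustration only (JAX-WS is a different Java SOAP stack from Axis2, and the class names here are made up), this is how the two parameter styles are usually expressed in Java annotations:

        import javax.jws.WebService;
        import javax.jws.soap.SOAPBinding;
        import javax.jws.soap.SOAPBinding.ParameterStyle;

        // WRAPPED (the usual default): parameters become children of a
        // <sayHello> wrapper element - the shape seen in the WSDL above.
        @WebService
        @SOAPBinding(parameterStyle = ParameterStyle.WRAPPED)
        class WrappedHello {
            public String sayHello(String username) { return "Hello, " + username; }
        }

        // BARE: the parameter maps to its own top-level element, so clients
        // can pass the value directly instead of a parameter struct.
        @WebService
        @SOAPBinding(parameterStyle = ParameterStyle.BARE)
        class BareHello {
            public String sayHello(String username) { return "Hello, " + username; }
        }

    Whether the Axis2/NetBeans toolchain exposes an equivalent switch is something to verify against the Axis2 documentation.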

    Read the article

  • How to store a date into a MySQL database with the Play framework in Scala?

    - by Rahul Kulhari
    I am working with the Play framework with Scala. What I am doing: a login page to log in to the web app, and a sign-up page to register for the web app; after login I want to show the user all database values.

    What I want to do: when a user registers for the web app, I want to store the user's values in the database with the current time and date, but my form is giving an error.

    Error:

        List(FormError(dates,error.required,List())),None)

    controllers/Application.scala:

        object Application extends Controller {
          val ta: Form[Keyword] = Form(
            mapping(
              "id" -> ignored(NotAssigned: Pk[Long]),
              "word" -> nonEmptyText,
              "blog" -> nonEmptyText,
              "cat" -> nonEmptyText,
              "score" -> of[Long],
              "summaryId" -> nonEmptyText,
              "dates" -> date("yyyy-MM-dd HH:mm:ss")
            )(Keyword.apply)(Keyword.unapply)
          )

          def index = Action {
            Ok(html.index(ta));
          }

          def newTask = Action { implicit request =>
            ta.bindFromRequest.fold(
              errors => {
                println(errors)
                BadRequest(html.index(errors))
              },
              keywo => {
                Keyword.create(keywo)
                Ok(views.html.data(Keyword.all()))
              }
            )
          }

    models/keyword.scala:

        case class Keyword(id: Pk[Long], word: String, blog: String, cat: String, score: Long, summaryId: String, dates: Date)

        object Keyword {
          val keyw = {
            get[Pk[Long]]("keyword.id") ~
            get[String]("keyword.word") ~
            get[String]("keyword.blog") ~
            get[String]("keyword.cat") ~
            get[Long]("keyword.score") ~
            get[String]("keyword.summaryId") ~
            get[Date]("keyword.dates") map {
              case id~blog~cat~word~score~summaryId~dates => Keyword(id, word, blog, cat, score, summaryId, dates)
            }
          }

          def all(): List[Keyword] = DB.withConnection { implicit c =>
            SQL("select * from keyword").as(Keyword.keyw *)
          }

          def create(key: Keyword) { DB.withConnection { implicit c =>
            SQL("insert into keyword values({word},{blog}, {cat}, {score},{summaryId},{dates})").on(
              'word -> key.word, 'blog -> key.blog, 'cat -> key.cat, 'score -> key.score,
              'summaryId -> key.summaryId, 'dates -> new Date()).executeUpdate
          } }
        }

    views/index.scala.html:

        @(taskForm: Form[Keyword])
        @import helper._
        @main("Todo list") {
          @form(routes.Application.newTask) {
            @inputText(taskForm("word"))
            @inputText(taskForm("blog"))
            @inputText(taskForm("cat"))
            @inputText(taskForm("score"))
            @inputText(taskForm("summaryId"))
            <input type="submit"> <a href="">Go Back</a>
          }
        }

    Please give me some idea of how to store the date into the MySQL database; the date is not a field of the form.

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' (BulkUpsert). I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
          [MDID] [int] NOT NULL,
          [LastUpdate] [datetime] NOT NULL,
          [Value] [float] NOT NULL,
          [Source] [varchar](20) NULL,
          CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
          ( [MDID] ASC )
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question:

        ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like:

        ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
          @updateTime datetime,
          @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
          set LastUpdate = getdate(), Value = t.Value, Source = @source
          from MarketDataCurrent c
          INNER JOIN #MDTUP t ON c.MDID = t.mdid
          where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
          select MDID, getdate(), Value, @source from #MDTUP
          where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
          SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
          FROM MarketDataCurrent c
          INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
          SELECT * FROM #TEMPTABLE2
          WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
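    Since the poster is open to any suggestion, one client-side mitigation that is commonly paired with server-side fixes: SQL Server reports the deadlock victim with error code 1205, and the victim's transaction is already rolled back, so the call can simply be retried. A minimal JDBC sketch (the proc name is taken from the question; everything else is illustrative):

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.SQLException;

        public class DeadlockRetry {
            private static final int DEADLOCK_VICTIM = 1205; // SQL Server error code

            // Run the upsert proc, retrying a bounded number of times if this
            // session is chosen as the deadlock victim.
            static void runBulkUpsert(Connection conn, int maxRetries) throws SQLException {
                for (int attempt = 1; ; attempt++) {
                    try (CallableStatement cs =
                            conn.prepareCall("{call dbo.MarketDataCurrent_LiveFeedUpload}")) {
                        cs.execute();
                        return; // success
                    } catch (SQLException e) {
                        if (e.getErrorCode() != DEADLOCK_VICTIM || attempt >= maxRetries) {
                            throw e; // not a deadlock, or out of retries
                        }
                        try {
                            Thread.sleep(50L * attempt); // brief backoff before retrying
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                            throw e;
                        }
                    }
                }
            }
        }

    This does not remove the underlying lock contention - it only makes the after-hours deadlocks non-fatal while the root cause is investigated.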

    Read the article

  • How to configure Emacs using this file?

    - by Andy Leman
    From http://public.halogen-dg.com/browser/alex-emacs-settings/.emacs?rev=1346 I got:

        (setq load-path (cons "/home/alex/.emacs.d/" load-path))
        (setq load-path (cons "/home/alex/.emacs.d/configs/" load-path))

        (defconst emacs-config-dir "~/.emacs.d/configs/" "")

        (defun load-cfg-files (filelist)
          (dolist (file filelist)
            (load (expand-file-name (concat emacs-config-dir file)))
            (message "Loaded config file:%s" file)))

        (load-cfg-files '("cfg_initsplit" "cfg_variables_and_faces" "cfg_keybindings"
                          "cfg_site_gentoo" "cfg_conf-mode" "cfg_mail-mode" "cfg_region_hooks"
                          "cfg_apache-mode" "cfg_crontab-mode" "cfg_gnuserv" "cfg_subversion"
                          "cfg_css-mode" "cfg_php-mode" "cfg_tramp" "cfg_killbuffer"
                          "cfg_color-theme" "cfg_uniquify" "cfg_tabbar" "cfg_python" "cfg_ack"
                          "cfg_scpaste" "cfg_ido-mode" "cfg_javascript" "cfg_ange_ftp"
                          "cfg_font-lock" "cfg_default_face" "cfg_ecb" "cfg_browser"
                          "cfg_orgmode"
                          ; "cfg_gnus"
                          ; "cfg_cyrillic"
                          ))

        ; enable disabled advanced features
        (put 'downcase-region 'disabled nil)
        (put 'scroll-left 'disabled nil)
        (put 'upcase-region 'disabled nil)

        ; narrow cursor
        ;(setq-default cursor-type 'hbar)
        (cua-mode)

        ; highlight current line
        (global-hl-line-mode 1)

        ; AV: non-aggressive scrolling
        (setq scroll-conservatively 100)
        (setq scroll-preserve-screen-position 't)
        (setq scroll-margin 0)

        (custom-set-variables
         ;; custom-set-variables was added by Custom.
         ;; If you edit it by hand, you could mess it up, so be careful.
         ;; Your init file should contain only one such instance.
         ;; If there is more than one, they won't work right.
         '(ange-ftp-passive-host-alist (quote (("redbus2.chalkface.com" . "on") ("zope.halogen-dg.com" . "on") ("85.119.217.50" . "on"))))
         '(blink-cursor-mode nil)
         '(browse-url-browser-function (quote browse-url-firefox))
         '(browse-url-new-window-flag t)
         '(buffers-menu-max-size 30)
         '(buffers-menu-show-directories t)
         '(buffers-menu-show-status nil)
         '(case-fold-search t)
         '(column-number-mode t)
         '(cua-enable-cua-keys nil)
         '(user-mail-address "[email protected]")
         '(cua-mode t nil (cua-base))
         '(current-language-environment "UTF-8")
         '(file-name-shadow-mode t)
         '(fill-column 79)
         '(grep-command "grep --color=never -nHr -e * | grep -v .svn --color=never")
         '(grep-use-null-device nil)
         '(inhibit-startup-screen t)
         '(initial-frame-alist (quote ((width . 80) (height . 40))))
         '(initsplit-customizations-alist (quote (("tabbar" "configs/cfg_tabbar.el" t) ("ecb" "configs/cfg_ecb.el" t) ("ange\\-ftp" "configs/cfg_ange_ftp.el" t) ("planner" "configs/cfg_planner.el" t) ("dired" "configs/cfg_dired.el" t) ("font\\-lock" "configs/cfg_font-lock.el" t) ("speedbar" "configs/cfg_ecb.el" t) ("muse" "configs/cfg_muse.el" t) ("tramp" "configs/cfg_tramp.el" t) ("uniquify" "configs/cfg_uniquify.el" t) ("default" "configs/cfg_font-lock.el" t) ("ido" "configs/cfg_ido-mode.el" t) ("org" "configs/cfg_orgmode.el" t) ("gnus" "configs/cfg_gnus.el" t) ("nnmail" "configs/cfg_gnus.el" t))))
         '(ispell-program-name "aspell")
         '(jabber-account-list (quote (("[email protected]"))))
         '(jabber-nickname "AVK")
         '(jabber-password nil)
         '(jabber-server "halogen-dg.com")
         '(jabber-username "alex")
         '(remember-data-file "~/Plans/remember.org")
         '(safe-local-variable-values (quote ((dtml-top-element . "body"))))
         '(save-place t nil (saveplace))
         '(scroll-bar-mode (quote right))
         '(semantic-idle-scheduler-idle-time 432000)
         '(show-paren-mode t)
         '(svn-status-hide-unmodified t)
         '(tool-bar-mode nil nil (tool-bar))
         '(transient-mark-mode t)
         '(truncate-lines f)
         '(woman-use-own-frame nil))

        ; answer with y or n instead of yes or no
        (fset 'yes-or-no-p 'y-or-n-p)

        (custom-set-faces
         ;; custom-set-faces was added by Custom.
         ;; If you edit it by hand, you could mess it up, so be careful.
         ;; Your init file should contain only one such instance.
         ;; If there is more than one, they won't work right.
         '(compilation-error ((t (:foreground "tomato" :weight bold))))
         '(cursor ((t (:background "red1"))))
         '(custom-variable-tag ((((class color) (background dark)) (:inherit variable-pitch :foreground "DarkOrange" :weight bold))))
         '(hl-line ((t (:background "grey24"))))
         '(isearch ((t (:background "orange" :foreground "black"))))
         '(message-cited-text ((((class color) (background dark)) (:foreground "SandyBrown"))))
         '(message-header-name ((((class color) (background dark)) (:foreground "DarkGrey"))))
         '(message-header-other ((((class color) (background dark)) (:foreground "LightPink2"))))
         '(message-header-subject ((((class color) (background dark)) (:foreground "yellow2"))))
         '(message-separator ((((class color) (background dark)) (:foreground "thistle"))))
         '(region ((t (:background "brown"))))
         '(tooltip ((((class color)) (:inherit variable-pitch :background "IndianRed1" :foreground "black")))))

    The above is an Emacs configuration file (geared towards Python development, among other things). Where should I put it in order to use it? And are there any other changes I need to make?

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away? Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome:

    I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is two-fold: you don't need to (memorize and) input those long and cryptic passwords, and they are available wherever you are once you log in to your Google account. The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is: can a Google employee see my passwords? Searching over the internet revealed several articles/messages:

      - Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing is mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue.
      - Chrome's insane password security strategy: Mostly along the same lines. You can steal a password from somebody if you have access to their computer account.
      - How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account.

    There are many more (including this one at this site), mostly along the same lines - points, counter-points, huge debates. I refrain from mentioning them here; simply run a search if you want to find them.

    Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, they can definitely be unhashed (decrypted), even if encrypted. This is very different from the passwords saved in Unix-like OSes, where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password.

    So are his concerns well founded, or will a little insight dispel his worry?

    The Answer

    SuperUser contributor Zeel helps put his mind at ease:

    Short answer: No*

    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element - or else the page wouldn't work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.

    Now, the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google's end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn't do them any good, since they would have no way to use it.*

    So no, Google employees cannot** access your passwords, since they are encrypted on their servers.

    * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. That being said, I think I will trust Google, and the millions they spend on security systems, over any other password storage solution. And heck, I'm a wimpy nerd - it would be easier to beat the passwords out of me than to break Google's encryption.

    ** I am also assuming that there isn't a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn't actually a factor any more. Moral: hit Win + L before leaving your machine.

    While we agree with Zeel that it's a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
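    The Unix-style scheme MMA describes - store only a one-way hash, re-hash the login attempt, compare digests - can be sketched in a few lines of Java. This is a concept illustration only: bare SHA-256 stands in for a real password scheme, which would add a per-user salt and a deliberately slow KDF such as bcrypt or PBKDF2.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class OneWayHashDemo {
            // Hash the password; the plain text itself is never stored.
            static byte[] hash(String password) throws Exception {
                return MessageDigest.getInstance("SHA-256")
                        .digest(password.getBytes(StandardCharsets.UTF_8));
            }

            // At login, hash the attempt and compare digests. Even a superuser
            // who can read the stored digest cannot recover the password from it.
            static boolean check(String attempt, byte[] storedDigest) throws Exception {
                return MessageDigest.isEqual(hash(attempt), storedDigest);
            }

            public static void main(String[] args) throws Exception {
                byte[] stored = hash("correct horse");
                System.out.println(check("correct horse", stored)); // true
                System.out.println(check("wrong guess", stored));   // false
            }
        }

    Chrome's password store cannot work this way, which is the crux of Zeel's answer: auto-fill has to recover the original text, so the store must be reversible (encryption) rather than one-way (hashing).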

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn't a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with ever-increasing demand to use a diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with the release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct their business in a secure, efficient and effective manner.

    As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build the cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM's basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint

    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as "disconnected applications". It allows for the existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests

    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So, how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most-used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user's job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. The ability to customize the user interface to suit your needs helps in exercising the business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates

    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an e-commerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration

    A key feature of any provisioning solution should be to empower each business unit to manage its own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate the administration overhead. To do so requires a federated services model, so that the business units capable of shouldering the onus of user lifecycle management of their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation

    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. Preventing unauthorized access - and, where it already exists, the ability to detect and remediate it - are key requirements of an enterprise-grade, proven solution. But the challenge with most solutions today is that some of this information still exists in silos. And when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive identity governance solution that our customer organizations are leveraging for closed-loop remediation, which gives administrators an automated way to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion

    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be a part of a larger security and identity management program, but with the ability to seamlessly integrate not only with the company's infrastructure but also to leverage the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so at multiple clients' sites, this is the approach we recommend.

    In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrating from a different solution.

    Meet the Writers:

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has been implementing Identity Management solutions for the past 8 years.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At Red Gate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power users soon switch to using a SQL Server database because it offers better performance and data sharing.

    In the fashionable spirit of optimization and metrics, an obvious product-management question is: 'Which is the most popular, SQL Server or MDB?' We've collected data about this fact, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    28                19.0               8115                 8115
        MDB           114              77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.)

    So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data

    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product managers cannot interpret the data unaided.

    Most of the data is from uninterested users

    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature

    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature set. If we listened uncritically to metrics at Red Gate, we would eliminate the most important and more complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"

    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.

    People get obsessed with significance levels

    The first thing that lots of people do with this data is a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity

    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data

    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    13                9.0                1115                 1115
        MDB           5                 4.2                449                  449

    Trial users:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    15                10.0               7000                 7000
        MDB           114              77.6               1000                 1000

    How do you interpret this data? It's one of:

      - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
      - Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
      - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
      - We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage

    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

      - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
      - Minor optimizations. Is the text box big enough for average user input?
      - Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who the user is, rather than what the user does, are easier to answer and action.

    Conclusion

    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article

  • Fixing predicated NSFetchedResultsController/NSFetchRequest performance with SQLite backend?

    - by Jaanus
    I have a series of NSFetchedResultsControllers powering some table views, and their performance on device was abysmal, on the order of seconds. Since it all runs on the main thread, it's blocking my app at startup, which is not great. I investigated, and it turns out the predicate is the problem:

        NSPredicate *somePredicate = [NSPredicate predicateWithFormat:@"ANY somethings == %@", something];
        [fetchRequest setPredicate:somePredicate];

    I.e. the fetch entity, call it "things", has a many-to-many relation with the entity "something". This predicate is a filter that limits the results to only things that have a relation with a particular "something". When I removed the predicate for testing, fetch time (the initial performFetch: call) dropped (for some extreme cases) from 4 seconds to around 100 ms or less, which is acceptable. I am troubled by this, though, as it negates a lot of the benefit I was hoping to gain with Core Data and NSFRC, which otherwise seems like a powerful tool.

    So, my question is: how can I optimize this performance? Am I using the predicate wrong? Should I modify the model/schema somehow? And what other ways are there to fix this? Is this kind of degraded performance to be expected? (There are on the order of hundreds of <1KB objects.)

    EDIT WITH DETAILS: Here's the code:

        [fetchRequest setFetchLimit:200];
        NSLog(@"before fetch");
        BOOL success = [frc performFetch:&error];
        if (!success) {
            NSLog(@"Fetch request error: %@", error);
        }
        NSLog(@"after fetch");

    Updated logs (previously, I had some application inefficiencies degrading the performance here; these are the updated logs that should be as close to optimal as you can get under my current environment):

        2010-02-05 12:45:22.138 Special Ppl[429:207] before fetch
        2010-02-05 12:45:22.144 Special Ppl[429:207] CoreData: sql: SELECT DISTINCT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 LEFT OUTER JOIN Z_1THINGS t1 ON t0.Z_PK = t1.Z_2THINGS WHERE t1.Z_1SOMETHINGS = ? ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:45:22.663 Special Ppl[429:207] CoreData: annotation: sql connection fetch time: 0.5094s
        2010-02-05 12:45:22.668 Special Ppl[429:207] CoreData: annotation: total fetch execution time: 0.5240s for 198 rows.
        2010-02-05 12:45:22.706 Special Ppl[429:207] after fetch

    If I do the same fetch without the predicate (by commenting out the two lines at the beginning of the question):

        2010-02-05 12:44:10.398 Special Ppl[414:207] before fetch
        2010-02-05 12:44:10.405 Special Ppl[414:207] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:44:10.426 Special Ppl[414:207] CoreData: annotation: sql connection fetch time: 0.0125s
        2010-02-05 12:44:10.431 Special Ppl[414:207] CoreData: annotation: total fetch execution time: 0.0262s for 200 rows.
        2010-02-05 12:44:10.457 Special Ppl[414:207] after fetch

    A 20-fold difference in times. 500 ms is not that great, and there does not seem to be a way to run it on a background thread or otherwise optimize it that I can think of. (Apart from going to a binary store, where this becomes a non-issue, so I might do that. Binary store performance is consistently ~100 ms for the above 200-object predicated query.)

    (I nested another question here previously, which I have now moved away.)

    Read the article

  • Placing Variables into an External Sheet

    - by Leslie Peer
    Trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on... So this is a two-fold question: what is the best type of external sheet to save to, and how do I access the sheet for saving and loading? Below are the variables:

        <SCRIPT>
        Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain";
        Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter";
        Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1;
        Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15;
        Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor";
        Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3;
        Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield";
        Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1;
        Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13;
        Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13;
        Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12;
        Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12;
        Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10;
        Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13;
        </SCRIPT>

    File name = "gamestats"; path = "trellian Webpage/droves E and F/gamestats". I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells... but I'm at a loss on how to arrive at a solution.

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmarks use non-local memory in a NUMA machine?

    - by osgx
    Hello. I want to run some tests from HPCC: STREAM and GUPS. They will test memory bandwidth, latency, and throughput (in terms of random accesses). Can I start the single-CPU STREAM test or single-CPU GUPS on a NUMA node with memory interleaving enabled? (Is it allowed by the rules of HPCC, the High Performance Computing Challenge?)

    Usage of non-local memory can increase GUPS results, because it will increase 2- or 4-fold the number of memory banks available for random accesses. (GUPS is typically limited by a non-ideal memory subsystem and by slow memory-bank opening/closing. With more banks it can do an update to one bank while the other banks are opening/closing.) Thanks.

    UPDATE: (you may not reorder the memory accesses that the program makes). But can the compiler reorder loop nesting? E.g. hpcc/RandomAccess.c:

        /* Perform updates to main table. The scalar equivalent is:
         *
         * u64Int ran;
         * ran = 1;
         * for (i=0; i<NUPDATE; i++) {
         *   ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0);
         *   table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)];
         * }
         */
        for (j=0; j<128; j++)
          ran[j] = starts ((NUPDATE/128) * j);

        for (i=0; i<NUPDATE/128; i++) {
          /* #pragma ivdep */
          for (j=0; j<128; j++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    The main loop here is for (i=0; i<NUPDATE/128; i++) { and the nested loop is for (j=0; j<128; j++) {. Using the 'loop interchange' optimization, the compiler can convert this code to:

        for (j=0; j<128; j++) {
          for (i=0; i<NUPDATE/128; i++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    It can be done because this loop nest is a perfect loop nest. Is such an optimization prohibited by the rules of HPCC?

    Read the article

  • Need a .NET method to compute a Google PageRank request checksum

    - by Steve K
    The company I work for is currently developing an SEO tool which needs to include the PageRank of a domain or URL. It is possible to retrieve such data directly from Google by sending a request to the URL called by the Google Toolbar. One of the parameters sent to that URL is a checksum of the domain whose PageRank is being requested. I have found multiple .NET methods for calculating that checksum; however, every one randomly returns corrupt values every so often. I can only handle errors to a certain point before my final data set becomes useless. I know that there are countless tools out there, from browser plugins to desktop applications, that can process PageRank, so it can't be impossible. My question, then, is two-fold:

    1) Has anyone heard of the problem I am having? (Specifically in .NET.) If so, how can it be (or has it been) resolved?
    2) Is there a better source for retrieving PageRank data?

    Below is the URL and checksum code I have been using:

        "http://toolbarqueries.google.com/search?client=navclient-auto&ie=UTF-8&oe=UTF-8&features=Rank:&q=info:" & strUrl & "ch=" & strCheckSum

    where:

        strUrl = the url being queried
        strCheckSum = CheckHash(GetHash(url)) (see code below)

    Any help would be greatly appreciated.

        ''' <summary>
        ''' Returns a hash-string from the site's URL
        ''' </summary>
        ''' <param name="_SiteURL">full URL as indexed by Google</param>
        ''' <returns>HASH for site as a string</returns>
        Private Shared Function GetHash(ByVal _SiteURL As String) As String
            Try
                Dim _Check1 As Long = StrToNum(_SiteURL, 5381, 33)
                Dim _Check2 As Long = StrToNum(_SiteURL, 0, 65599)
                _Check1 >>= 2
                _Check1 = ((_Check1 >> 4) And 67108800) Or (_Check1 And 63)
                _Check1 = ((_Check1 >> 4) And 4193280) Or (_Check1 And 1023)
                _Check1 = ((_Check1 >> 4) And 245760) Or (_Check1 And 16383)
                Dim T1 As Long = ((((_Check1 And 960) << 4) Or (_Check1 And 60)) << 2) Or (_Check2 And 3855)
                Dim T2 As Long = ((((_Check1 And 4294950912) << 4) Or (_Check1 And 15360)) << 10) Or (_Check2 And 252641280)
                Return Convert.ToString(T1 Or T2)
            Catch
                Return "0"
            End Try
        End Function

        ''' <summary>
        ''' Checks the HASH-string returned and adds check numbers as necessary
        ''' </summary>
        ''' <param name="_HashNum">generated HASH-string</param>
        ''' <returns>modified HASH-string</returns>
        Private Shared Function CheckHash(ByVal _HashNum As String) As String
            Try
                Dim _CheckByte As Long = 0
                Dim _Flag As Long = 0
                Dim _tempI As Long = Convert.ToInt64(_HashNum)
                If _tempI < 0 Then
                    _tempI = _tempI * (-1)
                End If
                Dim _Hash As String = _tempI.ToString()
                Dim _Length As Integer = _Hash.Length
                For x As Integer = _Length - 1 To 0 Step -1
                    Dim _quick As Char = _Hash(x)
                    Dim _Re As Long = Convert.ToInt64(_quick.ToString())
                    If 1 = (_Flag Mod 2) Then
                        _Re += _Re
                        _Re = CLng(((_Re \ 10) + (_Re Mod 10)))
                    End If
                    _CheckByte += _Re
                    _Flag += 1
                Next
                _CheckByte = _CheckByte Mod 10
                If 0 <> _CheckByte Then
                    _CheckByte = 10 - _CheckByte
                    If 1 = (_Flag Mod 2) Then
                        If 1 = (_CheckByte Mod 2) Then
                            _CheckByte >>= 1
                        End If
                    End If
                End If
                If _Hash.Length = 9 Then
                    _CheckByte += 5
                End If
                Return "7" + _CheckByte.ToString() + _Hash
            Catch
                Return "0"
            End Try
        End Function

        ''' <summary>
        ''' Converts the string (site URL) into numbers for the HASH
        ''' </summary>
        ''' <param name="_str">Site URL as passed by GetHash()</param>
        ''' <param name="_Chk">Necessary passed value</param>
        ''' <param name="_Magic">Necessary passed value</param>
        ''' <returns>Long Integer manipulation of string passed</returns>
        Private Shared Function StrToNum(ByVal _str As String, ByVal _Chk As Long, ByVal _Magic As Long) As Long
            Try
                Dim _Int64Unit As Long = Convert.ToInt64(Math.Pow(2, 32))
                Dim _StrLen As Integer = _str.Length
                For x As Integer = 0 To _StrLen - 1
                    _Chk *= _Magic
                    If _Chk >= _Int64Unit Then
                        _Chk = (_Chk - (_Int64Unit * Convert.ToInt64(_Chk \ _Int64Unit)))
                        _Chk = IIf((_Chk < -2147483648), (_Chk + _Int64Unit), _Chk)
                    End If
                    _Chk += CLng(Asc(_str(x)))
                Next
            Catch
            End Try
            Return _Chk
        End Function

    Read the article

  • Sending XML to a Servlet from ActionScript

    - by John Doe
    I am only getting empty arrays on output. Does anyone know what exactly I'm doing wrong?

        package myDungeonAccessor;

        /*
         * To change this template, choose Tools | Templates
         * and open the template in the editor.
         */
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class myDungeonAccessorServlet extends HttpServlet {

            private myDungeonAccessor dataAccessor;

            /**
             * Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods.
             * @param request servlet request
             * @param response servlet response
             * @throws ServletException if a servlet-specific error occurs
             * @throws IOException if an I/O error occurs
             */
            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                response.setContentType("text/html;charset=UTF-8");
                PrintWriter out = response.getWriter();
                try {
                    /* TODO output your page here
                    out.println("<html>");
                    out.println("<head>");
                    out.println("<title>Servlet myDungeonAccessorServlet</title>");
                    out.println("</head>");
                    out.println("<body>");
                    out.println("<h1>Servlet myDungeonAccessorServlet at " + request.getContextPath () + "</h1>");
                    out.println("</body>");
                    out.println("</html>");
                    */
                } finally {
                    out.close();
                }
            }

            // <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
            /**
             * Handles the HTTP <code>GET</code> method.
             * @param request servlet request
             * @param response servlet response
             * @throws ServletException if a servlet-specific error occurs
             * @throws IOException if an I/O error occurs
             */
            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                processRequest(request, response);
                // PrintWriter out = response.getWriter();
                System.out.println("yo mom");
            }

            /**
             * Handles the HTTP <code>POST</code> method.
             * @param request servlet request
             * @param response servlet response
             * @throws ServletException if a servlet-specific error occurs
             * @throws IOException if an I/O error occurs
             */
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                //System.out.println("heppo");
                //dataAccessor = new myDungeonAccessor();
                System.out.println("Hello");
                try {
                    System.out.println("HEADERS: " + request.getHeaderNames());
                    ObjectInputStream in = new ObjectInputStream(request.getInputStream());
                    ObjectOutputStream out = new ObjectOutputStream(response.getOutputStream());
                } catch (Exception e) {
                    e.printStackTrace();
                }
                System.out.println("WAZZUP");
                byte[] buffer = new byte[4096];
                //in.read(buffer);
                System.out.println("TEST!");
                String s = new String(buffer);
                System.out.println("Update S:" + s);
            }

            /**
             * Returns a short description of the servlet.
             * @return a String containing servlet description
             */
            @Override
            public String getServletInfo() {
                return "Short description";
            }
        }
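    One likely culprit, offered as a guess rather than a confirmed diagnosis: ActionScript posts the XML as a raw byte stream, not as Java-serialized objects, so wrapping the request in an ObjectInputStream (which expects a Java serialization header) fails, and because the in.read(buffer) line is commented out the buffer is never filled - hence the empty output. A minimal sketch of reading the posted body as plain text instead:

        import java.io.BufferedReader;
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class XmlEchoServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // Read the raw request body as text; ActionScript sends plain
                // XML, not Java-serialized objects.
                StringBuilder xml = new StringBuilder();
                BufferedReader reader = request.getReader();
                String line;
                while ((line = reader.readLine()) != null) {
                    xml.append(line).append('\n');
                }
                System.out.println("Received XML: " + xml);

                // Reply with plain text/XML rather than an ObjectOutputStream.
                response.setContentType("text/xml;charset=UTF-8");
                response.getWriter().write("<ok/>");
            }
        }

    The class name XmlEchoServlet is made up for the sketch; parsing the received XML (e.g. with a SAX or DOM parser) is left out.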

    Read the article

  • Can I create different observables and different corresponding observers in Java?

    - by mithun1538
    Hello everyone. Currently, I have one observable and many observers. What I need is different observables and, depending on the observable, different observers. How do I achieve this? (For understanding, assume I have different apples - say apple1, apple2... I have observer_1 observing apple1, observer_2 observing apple2, observer_3 observing apple2, and so on.) I tried creating different objects of the Observable class, but since the observers are observing the same class of observable, I don't know how to access a particular instance of the Observable. I have included the following servlet code that contains the Observer and Observable classes:

        public class CustomerServlet extends HttpServlet {

            public String getNextMessage() {
                // Create a message sink to wait for a new message from the
                // message source.
                return new MessageSink().getNextMessage(source);
            }

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream());
                String recMSG = getNextMessage();
                dout.writeObject(recMSG);
                dout.flush();
            }

            public void broadcastMessage(String message) {
                // Send the message to all the HTTP-connected clients by giving the
                // message to the message source
                source.sendMessage(message);
            }

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                try {
                    ObjectInputStream din = new ObjectInputStream(request.getInputStream());
                    String message = (String) din.readObject();
                    ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream());
                    dout.writeObject("1");
                    dout.flush();
                    if (message != null) {
                        broadcastMessage(message);
                    }
                    // Set the status code to indicate there will be no response
                    response.setStatus(response.SC_NO_CONTENT);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            @Override
            public String getServletInfo() {
                return "Short description";
            }// </editor-fold>

            MessageSource source = new MessageSource();
        }

        class MessageSource extends Observable {
            public void sendMessage(String message) {
                setChanged();
                notifyObservers(message);
            }
        }

        class MessageSink implements Observer {
            String message = null; // set by update() and read by getNextMessage()

            // Called by the message source when it gets a new message
            synchronized public void update(Observable o, Object arg) {
                // Get the new message
                message = (String) arg;
                // Wake up our waiting thread
                notify();
            }

            // Gets the next message sent out from the message source
            synchronized public String getNextMessage(MessageSource source) {
                // Tell source we want to be told about new messages
                source.addObserver(this);
                // Wait until our update() method receives a message
                while (message == null) {
                    try {
                        wait();
                    } catch (Exception e) {
                        System.out.println("Exception has occured! ERR ERR ERR");
                    }
                }
                // Tell source to stop telling us about new messages
                source.deleteObserver(this);
                // Now return the message we received
                // But first set the message instance variable to null
                // so update() and getNextMessage() can be called again.
                String messageCopy = message;
                message = null;
                return messageCopy;
            }
        }
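    A sketch of one way to get at 'a particular instance of the Observable', assuming the java.util.Observable/Observer pair from the question is acceptable: keep each observable in a map under a name, and register each observer only with the instances it should watch. The Apple class and the names apple1/apple2 are invented for illustration.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Observable;
        import java.util.Observer;

        class Apple extends Observable {
            private final String name;
            Apple(String name) { this.name = name; }
            String getName() { return name; }
            void change(String event) {
                setChanged();
                notifyObservers(event); // notifies this apple's observers only
            }
        }

        public class Orchard {
            public static void main(String[] args) {
                // Named instances, so a particular observable can be looked up later.
                Map<String, Apple> apples = new HashMap<>();
                apples.put("apple1", new Apple("apple1"));
                apples.put("apple2", new Apple("apple2"));

                Observer observer1 = (o, arg) ->
                    System.out.println("observer_1 saw " + ((Apple) o).getName() + ": " + arg);
                Observer observer2 = (o, arg) ->
                    System.out.println("observer_2 saw " + ((Apple) o).getName() + ": " + arg);

                apples.get("apple1").addObserver(observer1); // observer_1 watches apple1
                apples.get("apple2").addObserver(observer2); // observer_2 watches apple2

                apples.get("apple1").change("ripened"); // only observer_1 fires
                apples.get("apple2").change("fell");    // only observer_2 fires
            }
        }

    In update() (or the lambdas above), the Observable argument identifies which instance fired, so one observer can also watch several apples and tell them apart.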

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with in 'newer', improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations, like queries, DataTables and DataRows are still surfaced to the business layer. Here's an example: a business object method that runs a dynamic query, where the code ends up looping over the result set using the ugly DataRow array syntax:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
            if (result < 0)
                return result;
            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                string title = row["title"] as string;
                string safeTitle = row["safeTitle"] as string;
                int pk = (int)row["pk"];
                string newSafeTitle = this.GetSafeTitle(title);
                if (newSafeTitle != safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", pk));
                    result++;
                }
            }
            return result;
        }

    The problem with looping over DataRow objects is two-fold: the array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
            if (result < 0)
                return result;
            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                dynamic entry = new DynamicDataRow(row);
                string newSafeTitle = this.GetSafeTitle(entry.title);
                if (newSafeTitle != entry.safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", entry.pk));
                    result++;
                }
            }
            return result;
        }

    The code looks a bit more natural and describes what's happening a little more clearly as well. And using the new dynamic features in .NET, it's actually quite easy to implement the DynamicDataRow class.

    Creating your own custom Dynamic Objects

    .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM interop scenarios), but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.
    In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like:

        /// <summary>
        /// This class provides an easy way to turn a DataRow
        /// into a Dynamic object that supports direct property
        /// access to the DataRow fields.
        ///
        /// The class also automatically fixes up DbNull values
        /// (null into .NET and DbNull to DataRow)
        /// </summary>
        public class DynamicDataRow : DynamicObject
        {
            /// <summary>
            /// Instance of object passed in
            /// </summary>
            DataRow DataRow;

            /// <summary>
            /// Pass in a DataRow to work off
            /// </summary>
            /// <param name="dataRow"></param>
            public DynamicDataRow(DataRow dataRow)
            {
                DataRow = dataRow;
            }

            /// <summary>
            /// Returns a value from a DataRow items array.
            /// If the field doesn't exist null is returned.
            /// DbNull values are turned into .NET nulls.
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="result"></param>
            /// <returns></returns>
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                try
                {
                    result = DataRow[binder.Name];
                    if (result == DBNull.Value)
                        result = null;
                    return true;
                }
                catch { }
                result = null;
                return false;
            }

            /// <summary>
            /// Property setter implementation tries to retrieve value from instance
            /// first then into this object
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="value"></param>
            /// <returns></returns>
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                try
                {
                    if (value == null)
                        value = DBNull.Value;
                    DataRow[binder.Name] = value;
                    return true;
                }
                catch { }
                return false;
            }
        }

    To demonstrate the basic features, here's a short test:

        [TestMethod]
        [ExpectedException(typeof(RuntimeBinderException))]
        public void BasicDataRowTests()
        {
            DataTable table = new DataTable("table");
            table.Columns.Add(new DataColumn() { ColumnName = "Name", DataType = typeof(string) });
            table.Columns.Add(new DataColumn() { ColumnName = "Entered", DataType = typeof(DateTime) });
            table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) });

            DataRow row = table.NewRow();
            DateTime now = DateTime.Now;
            row["Name"] = "Rick";
            row["Entered"] = now;
            row["NullValue"] = null; // converted into DbNull

            dynamic drow = new DynamicDataRow(row);
            string name = drow.Name;
            DateTime entered = drow.Entered;
            string nulled = drow.NullValue;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(entered, now);
            Assert.IsNull(nulled);

            // this should throw a RuntimeBinderException
            Assert.AreEqual(entered, drow.enteredd);
        }

    The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done, you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'DataSource' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.
By implementing the two simple methods of the DynamicDataRow class (TryGetMember and TrySetMember), we get to provide two features I was complaining about at the beginning that are missing from the DataRow:

- Direct property syntax
- Automatic type casting, so no explicit casts are required

Caveats

As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing, where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocation - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code, the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly, so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (i.e. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work.

Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime, so there's no type recognition until runtime. This means no IntelliSense at development time, and any invalid references that call into 'properties' (i.e. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row, you still get a runtime error if you mistype a column name:

    // this should throw a RuntimeBinderException
    Assert.AreEqual(entered, drow.enteredd);

Dynamic - Lots of uses

The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand, there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here, you can also implement 'method missing' behavior on objects with TryInvokeMember (as sketched above), which essentially allows you to create dynamic methods. It's all very flexible and, maybe just as important, it's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, .NET

    Read the article

  • CodePlex Daily Summary for Monday, April 19, 2010

    CodePlex Daily Summary for Monday, April 19, 2010New Projects8085 Microprocessor simulator: This program allows you to write 8085 programs in assembly and run those programs on your PC. It comes with lots of help, plus you can put breakpo...Additional.NET framework: The Additional.Net framework extends the functionality of the .NET framework for easier application development. It is developed in C#.Astoria Contrib: A contrib project for filling the gaps in WCF Data Services, providing missing functionality or augmenting with T4 templates, helpers, etc.ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination compute...elearning Center: Đây là một ứng dụng web viết hoàn toàn bằng Sliverlight. Ứng dụng này là một dạng elearning với đầy đủ chức năng và có khả năng tương tác tối đa v...Excel VSTO SQL Server Browser: Get Data from SQL Server and put it in Excel directly. The objective is to get more control about what do you need to pull and create automatic pro...Generic Tree Structure: Generic Base Classes that helps you to create complex tree structures without writing it again and again. Simply to use Like "var Node<Folder> fold...LAN Lordz LAN Party System: The LAN Lordz LAN Party System makes it easier for medium and large size events to track their attendance, sponsors, door prizes, tournaments, and...LiteFx: O LiteFx é um framework que ajuda na implementação de DDD (anêmico ou rico) ele foi desenvolvido por Douglas Aguiar (http://twitter.com/DouglasAgui...Managed UI Flow for ASP.NET MVC Framework: If your web application getting more complex, understanding and managing of complex UI flows(pageflow of application) getting harder and harder, If...Meus Exemplos: Meus ExemplosOrchard Blueprint Theme: Orchard BluePrint is a project that provides a WYSIWYG reference implementation of a Orchard theme to help designers get started with theme design....Outlook Social Network Connector - Avatar: Avatar 是一个开源的MS Outlook的插件,豆瓣用户可以在Outlook 2010中使用豆瓣。查看一封邮件中相关的收件人、发件人的用户广播、同城活动以及豆邮。不用上豆瓣也能方便了解好友动态。这个插件使用C#, .NET 4.0 开发。API 请求认证使用OAuth 认证。 (Avat...Quadro Tree: This is Quadri tree library.Sharepoint 2010 Alert Controller: In MOSS 2007 or Sharepoint Server 2010 if you want to see your alerts by list name you should use this tool.SharePoint Web Parts: The goal of this project is to develop a set of web parts for SharePoint.Silverlight Image Cropper: This is a silverlight 4 util that makes it easy to crop out a number images of a specific resolution screen or screens. ie. an easy way to crop ...SilverlightFTP: Silverlight ftp clientsplibex: libraries for sharepoint lists manipulationStardustExtensions: Official Extensions for StardustSwim Team Manager: Swim Team Manager is designed for managing and tracking administrative and performance information for your club, school, or swim team. Swim Tea...ToDoListWpf: A To Do List, I used it to manage my work items. I am sorry for my poor English.Trance Layer: TranceLayer is a fast and flexible logging or diagnostics framework for .Net. It allows you to plug it into an existing or new application with m...Unoficcial NeoFM.hu NowPlaying: A little windows tray program. 
Shows what's on neofm.hu right now.WabbitStudio Z80 Software Tools: The software suite provides all of the tools you need to create high quality Z80 software in Z80 assembly language, with a focus on TI calculators....WinToolbar: Windows.Toolbar is Silverlight library that implements common widgets that allows us to build a rich toolbar control in our applications, it incor...XP-More: XP-More is a tool that helps manage Windows 7 Virtual Machines (XP Mode and any other). Specifically, it makes duplication of VMs a no brainer - no...Yodelay .NET Framework Extensions: The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include bas...New ReleasesClipoWeb: ClipoWeb 1.0: First Beta release of the ClipoWeb web applicationDDDSample.Net: 0.8: This release contains all four versions of DDDSample.Net available in previous, 0.7 and a brand new one: Layered Model version. Layered Model demon...DotNetNuke Blueprint: 00.00.02: Added to this version CSS Reset Skin version including Grids This version will soon be updated with corresponding HTML version and DNN templateEsferatec.Text.RegularExpressions: 3.5.1003.1001: first stable release of the class; the assembly file is ready to use, the documentation is complete;Excel VSTO SQL Server Browser: Sample Only: Sample without Ribbon UI, if you close the TaskPane you will no longer able to open it without restart ExcelFolder Bookmarks: Folder Bookmarks 1.5.5: This is the latest version of Folder Bookmarks (1.5.5), with the new Archive Manager and Archive Viewer. It has an installer - it will create a dir...Gardens Point LEX: Gardens Point LEX, Version 1.1.3: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.3 corre...Gardens Point Parser Generator: Gardens Point Parser Generator V1.4.0: The distribution is a zip archive which contains the binary executables, documentation, source code and examples. ChangesVersion 1.4.0 of GPPG has...HKGolden Express: HKGoldenExpress (Build 201004181455): New features: Added rating of each topic. (Note: This feature is availabe since Build 201004172120) Bug fix: Handle invalid XML character in XML...Home Access Plus+: v4.0.0.0 Beta: v4.0.0.0 Beta Change Log: Moved to using .net 4.0 New Silverlight Uploader Various .net 4 fixes and tweaks File Changes: All fixes have changedHTML Ruby: 6.21.6: Reduced performance hit on pages with heavy DOM manipulations Fixed issue where empty tags caused it to apply invalid spacing values Stop spaci...LINQ to VFP: LinqToVfp (v1.0.17.2): Modified to allow using RecNo as a primary key. This build requires IQToolkit v0.17b.Managed UI Flow for ASP.NET MVC Framework: Preview 1: The source available on this site, does not reflect the final state of the project, it is a preview of what will be shipping in the framework in th...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (2): Super minor update to accommodate the new Blend 4 RC. Only changes: The path to the Blend 4 templates changed to be "My Documents\Expression\Blend...N2 CMS: 2.0 rc: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes (1.5 -> 2.0 release candid...OpenGL ES 2.0 Compact Framework Wrapper: Sample application CAB with texturing: This took some time as it was pretty hard to get the texture loaded and setup so that it would bind to the sampler2D in the fragment shader. 
Featu...Orchard Blueprint Theme: 00.00.01: This is the first release of this project, still in a very alpha version. Very soon this release is to be updated with the HTML version of the them...RoughJs: RoughJsSL: This is Silverlight library's CompilerSharepoint 2010 Alert Controller: Sharepoint 2010 Alert Controller: After you download WSP file you can get help from Home PageSharePoint LogViewer: SharePoint LogViewer 2.5: Minimize log viewer to tray Get popup notification of SharePoint log events from tray Redirect log entries to event log Send email notifications on...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.1: This is a minor update which includes the following changes: Code consolidation across the whole project Additional site data captured. See solut...Stardust: Stardust 1.0: First stable version of Stardust (Build 172)StardustExtensions: Facebook Extension: Extension for stardust to upload and post images on Facebook.StardustExtensions: Facebook Extension (Source): The source code of an extension for Stardust used to post images on facebook.StardustExtensions: WPF Example: This is an example extension. Uses WPF to create a Window and say "Hello World!" Is a perfect download if want to start writing Stardust ExtensionsStardustExtensions: WPF Example Source: This is the source code of an extension that creates a Window using WPF & displays a simple text. Is great as an example of creating Stardust Exten...TFTP Server: TFTP Server 1.1 Beta installer: New MSI based installer Installs a TFTP service Supports multiple servers on different endpoints, with every server pointing to its own root di...TiledLib: TiledLib 1.1: This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab. Changes: Bug fixes in a few methods Ad...Trance Layer: TranceLayer Digger: Digger version is a beta. It is intended to be used as a demonstration of muscles while lacking a set of features that are in the docs. The set of ...uManage - Active Directory Self-Service Portal: uManage v1.2 (.NET 4.0 RTM): New Releasev1.2 Adds the Administrative Portal as well as the requirement of a MSSQL database (2005+). The Setup Wizard has also been updated to i...Unoficcial NeoFM.hu NowPlaying: NeoNotifier: First release. Aplha, but usable.VidCoder: 0.2.1: Changes: Added 2-pass encoding Fixed x264 options getting mangled during p-invoke Fixed intermittent crash with logging window open due to thre...WCF RIA Services Contrib: WCF RIA Services Contrib RC2 Release: This version is for the WCF RIA Services RC2 (SL4 RTM) release. The ApplyState has been modifed in this version to disable validation during proces...WiiCIS.NET: WiiCIS.NET v0.2: Changes... 
- Removal of WiimoteManager, connection must be done manually - Accelerometer orientation was originally in degrees, is now in radians -...WinToolbar: WinToolbar Source code plus sample: This zip file contains the current version source code and libraries plus a testrunner (sample app).XP-More: 0.9 (Beta): Most of the functionality is in place, final polishing will be done soon.Most Popular ProjectsFacebook Developer ToolkitWSPBuilder (SharePoint WSP tool)QuickGraph, Graph Data Structures And Algorithms for .NetPerformance Analysis of Logs (PAL) Toolpatterns & practices: Team Development with Visual Studio Team Foundation ServerTFS Integration Platformpatterns & practices: Performance Testing Guidance for Web Applicationspatterns & practices: Enterprise Library ContribJSON ViewerManaged Wifi APIMost Active ProjectsRawrpatterns & practices – Enterprise LibraryIndustrial DashboardIonics Isapi Rewrite FilterFarseer Physics EngineMVVM Light ToolkitjQuery Library for SharePoint Web ServicesN2 CMSCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NET

    Read the article

  • CodePlex Daily Summary for Friday, April 16, 2010

    CodePlex Daily Summary for Friday, April 16, 2010New Projects[C#] UML Touch: This project is my master thesis project for Msc Software Engineering at Oxford Brookes univesity. The goal of this project is to develop a UML Edi...3D Chart Reports, using ASP.NET 2.0: Project focuses on the visibility of internal supply chain management process. The objective is materialized, using MSChart and hence provides asse...Active Directory Health: ActiveDirectoryHealth (ADHealth) Aim: To create a suite of tools to allow an Active Directory administrator monitor and identify problems, particu...Art4Desktop: Aplicação desktop simplesAuto Increment field for any entity in MS CRM: Auto Increment field for any entity in MS CRM will add an attribute in any entity which will be auto increment type.Convection Game Engine (Basic Edition): A basic 2D game engine written in C# using XNA 3.1CSharp Intellisense: VS 2010 IntelliSense plug-in Provides custom IntelliSense for the CSharp Editor that is capable to filter out events, preoperties or methods. ...EP: Generic platform for enterprise applications developed on .NETFile Change Checker: A simple windows form application that checks and copies all modified files given a specified date. It aims to help developers that use Visual Stu...Folder: Folder is a puzzle platformer game, which provides you a whole new experience. You can fold objects in game play; a wall becomes a road, a high sti...Industrial Dashboard: WCF service that allows executing SQL Server stored procedures straight from javascript code, enabling sending and receiving structured data withou...kafrilion: Open Source Platform GameLogikBug.Injection: LogikBug.Injection is an IOC container and is very light weight. It is very simple to use and yet very roboust, there are many options and ways to ...MongoMvc: NoSql with MongoDB, NoRM and ASP.NET MVC. For more details check out the blog post http://weblogs.asp.net/shijuvarghese/archive/2010/04/16/nosql-wi...MOSSDAL: MOSS Data Access Layer for data from the Sharepoint Lists Service: MOSSDAL is a lightweight framework for working with Sharepoint MOSS List data using the List web Service. It can be used with silverlight or regul...Should: This is a set of test framework agnostic extensions assetion extensions. This project was born because test runners Should be independent of the...Software Localization Tool: summaryTIMETABLEASY: planning managerTR9: implementions of t9 technologies from mobile device to personal computer and laptopTrp net tools: net toolTwep: Twep is a JavaScript lib.Using PowerPivot to analyze MS Dynamics NAV: Project show how to prepare MS Dynamics NAV data for analyzing in PowerPivot for Excel. Project include Data Warehouse demo database, sql procedure...vmware virtual mashines management system for education: System for management many classes with PC and VMware virtual mashines (bad english,sorry)WebSite: Mr. John's projectsWebtrends DX and DC Services Libraries: This project is setup to provide assemblies to simplify access to the REST based DX and DC Web Services that Webtrends Provides. For access you ne...YetAnotherFileRenamer: Searches a directory and/or sub-directories for files matching a certain extension and copies and renames them to the same folder. 
The intent is a...New Releases3D Chart Reports, using ASP.NET 2.0: Beta iscm1.0.0.1: Its first release, might be a tolerant to bugs.A Guide to Parallel Programming: Drop 2 - Guide Chapters 2 and 5, accompanying code: This is Drop 2 with just the Guide Chapters 2 and 5 and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 or later in ord...AnimeJ: AnimeJ 1.1: This new release features reversible animations. It is fully backward compatible, and by simply specifying an additional parameter to Run() you can...AutoPoco: AutoPoco 0.4: A load of additions to the convention system Inheritance precedence added And a pile of extra data sourcesCSharp Intellisense: V1.5: CSharpIntellisensePresenter(Free) Created by: Bnaya Eshet (Bnaya Eshet (credit to Karl Shifflett)) Last Updated: Tuesday, April 13, 2010 Version...DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 4: Alpha 4 upgrades all story-telling with one very basic, simple, 'story' data pattern, schema, and stylesheet. Basic, simple, stories and eBook pa...ESPEHA: Espeha 4.3: Deletion of categories and tasks via F8 Saving on CtrlS, CtrlShiftS Navigating to search on CtrlF Hiding on CtrlQ, CtrlX, CtrlH, Q, X, H + Drag ...Folder Bookmarks: Folder Bookmarks 1.4.4: This is the latest version of Folder Bookmarks (1.4.4), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...HouseFly controls: HouseFly controls alpha 0.9.1: Version 0.9.1 alpha release of HouseFly controlsIndustrial Dashboard: 2.0 Beta: 2.0 Beta includes : IndustrialDashboard Framework Sample widgets : TabularReport, DropDownPicker, DatePicker, ScopePicker Sample html pageskdar: KDAR 0.0.20: KDAR - Kernel Debugger Anti Rootkit - signature's bases updated - many bugs fixedLogikBug.Injection: Initial Release: This project is dependent upon Microsoft.Practices.ServiceLocation.IServiceLocator and must be referenced when referencing LogikBug.Injection.Mesopotamia Experiment: Mesopotamia 1.2.72: New Features - Select organims to load in sim or in hw mode Fixes - fixed robotics engine not shutting down properly - selects new robotics port o...MobySharp: MobySharp 1.1: Fixed GetComments Added the new Likes featureMongoMvc: NoSQL with MongoDB, NoRM and ASP.NET MVC: A demo on NoSQL with MongoDB, NoRM and ASP.NET MVC. To run the demo application do the ollowing actions. 1. Create a directory call C:\data\...NetMassDownloader: Release 1.6.0.0: Mass Downloader For .Net Framework which allows you do download .Net Framework source code all at once.Offline debugging for VS2005 , VS2008 , Code...Nito.KitchenSink: Version 4: Features added in this release: PDBs are source-indexed to CodePlex, so it should be possible to step through the code (with on-demand downloads) w...Nito.KitchenSink: Version 5: The only feature for this release is support of the .NET 4.0 platform. Dependencies Nito.Linq 0.4 Beta (released 2010-04-15) Rx 1.0.2441.0 (rele...Nito.LINQ: Beta (v0.4): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2441.0, released 2010-04-15. Breaking changes IEnumerable<T>.Min and IEnum...N-Twill Twitter Client for VB.NET: NTWILL PROJECT 15-abril-2010: Este archivo es un Zip que contiene el proyecto hasta este punto editado... 
tiene algunas que otras funciones, pero lo cuelgo para tener un referen...PanBrowser: 1.2.0: added screensaver support, hit a key to exit, up/down keys cycle through images.PersianDateTimePicker, PersianMonthCalendar: PersianDatetimePicker,PersianMonthCalendar: PersianDatetimePicker,PersianMonthCalendar 1.1 releasesPokeIn Comet Ajax Library: PokeIn Library v03: New Features! Client MessageLog Management Added CrossBrowser Script Injector Added to work with non ajax supported browsers and activeX disabled...PokeIn Comet Ajax Library: PokeIn Sample VS09: Sample Project of PokeIn VS09. Contains version 0.3 of LibraryPokeIn Comet Ajax Library: PokeIn Sample VS10: Sample Visual Studio 10 project for PokeIn. Contains v03 Library of PokeInSilverlight Toolkit: April 2010: Suggestions? Features? Questions? Ask questions in our Silverlight Toolkit forum on Silverlight.net. The forum is the best resource for Silverlig...Software Localization Tool: SharpSLT 0.9: This is the first release of SharpSLT. Features Framework: .Net 3.5 UI language: English Works as standalone and External Tool for Visual C# Ex...Star Trooper for XNA 2D Tutorial: Lesson two content: Here is Lesson two original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkgen...StyleCop for ReSharper: StyleCop for ReSharper 5.0.14714.1: StyleCop for ReSharper 5.0.14714.1 o StyleCop 4.3.3.0 o ReSharper 5.0.1659.36 o Visual Studio 2008 / Visual Studio 2010 Installation Instructi...TaskUnZip for SSIS: TaskUnZip for SSIS 1.1.1.0 (beta 2): Ver. 1.1.1.0 (beta2) Bug: Correct installation bug. Add: Support SQL SERVER 2008 / 2005 Minor corrections Ver. 1.1.0.0 (Tnx to: Kevin Wendler)...TaskUnZip for SSIS: TaskUnZip for SSIS 1.2.0.0: Ver. 1.2.0.0 Add: Support SQL SERVER 2008 / 2005. Add: recursive compress.* Add: filter option for extract e compress file.* Add: Test archiv...Test Project (ignore): abcde: aaaaadaTR9: TR9 Beta: this setup is first release of the program.TR9: TR9 Source Code: TR9 Source CodeUsing PowerPivot to analyze MS Dynamics NAV: GL sql procedures and demo DB: For using sql procedures and creating DW database, please see Documentation.VivoSocial: VivoSocial 7.1.1: Version 7.1.1 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.1.1 rel...WPF Inspirational Quote Management System: Release 1.1.1: - Changed to only allow one running instance a time. - Custom icon in system tray popup removed for now as this breaks when text size is set to gr...WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6.1: Just a minor release to fix the problem with resource Uri generation in the C# Shadereffect files. ChangesThe Uri should be correctly generated no...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelpatterns & practices – Enterprise LibraryMost Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationIndustrial DashboardFarseer Physics EngineIonics Isapi Rewrite FilterNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETjQuery Library for SharePoint Web ServicesDotRas

    Read the article

  • CodePlex Daily Summary for Friday, October 18, 2013

    CodePlex Daily Summary for Friday, October 18, 2013Popular ReleasesVoya Media: Voya Media v. 1.2.255: Voya Media is free, open-source and provides one central place to play and organize all your music, pictures and videos. Voya Media | Twitter | Facebook | Google+ Features: Scans disk drives and network shares for valid media files. Generates playlist tags based on directory names. Supports MP3 tags and cover images. Supports built-in and external SRT/SUB subtitles. Gets TV Show episode details from the tvrage.com API. Gets Movie details from the themoviedb.org API. Supports Med...Aphid: Aphid 0.1.0.17 Alpha: Added several samples Fixed + operator bugpatterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Media Companion: Media Companion MC3.582b: New* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully visible. * Movie - Manual renaming checks if 'Use Fold...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.8 Rel1: v2.3.8 Is now DNN6 and DNN7 compatible NOTE: NB_Store v2.3.8 is NOT compatible with DNN5. SOURCE CODE : https://github.com/leedavi/NB_Store (Source code has been moved to GitHub, due to issues with codeplex SVN and the inability to move easily to GIT on codeplex)Social Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. 
QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: - MSI Setup - UI Improved - CM12 Console integration - New Powershell code snippets - Client Center IntegrationSandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js awe.autoSize = false ) - added Parent, Paremeter extensions ...Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevent WPP to Reset Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer. Left-Click to open, Right-Click to copy to the Clipboard.WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. new additions: fixed the thumbnail problem for backgrounds. general clean up and error checking. need to get this put through the wringer and all feedback is welcome.BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New Features: A new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube. The Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4. This includes a large number of language enhancements, bugfixes, and project deployment support. 
Fixed Issues: Fixed Biml execution for project connections fixing a bug with Tabular Translations Editor not a...Free language translator and file converter: Free Language Translator 3.4: fixes for new version look up.New ProjectsA Program to Calculate the Sum of Two Numbers: Dennis Gaya - University of HertfordshireAllow users to change their Active Directory password from SharePoint 2013: Allow users to change their Active Directory password from SharePoint 2013 user menu contextAutoCAD Code Pack: AutoCADCodePack is a powerful library that help you to develop AutoCAD plugins using the AutoCAD .NET API.Core Engine: A game engine, written by engineers, for engineers.EmeraldForum: A simple forumFcdbas: FcdbasFreedom: ...High School Behaviour Log: A web based tool for recording student detentions, behaviour issues as well as referrals and positive behaviour. Integrated with school's SIMS.net MIS system.htmlcxx unicode version: thanks for Davi de Castro Reis and Robson Braga Ara?o's work. also thanks for Kasper Peeters 's tree.hhLets Learn DirectX 11.1 Series: Lets Learn DirectX 11.1 is a project run by Ernest Loveland to make knowledge about DirectX 11.1 development more public and freely available.Migrate CPPUnit to VS2013: migrate CPPUnit to VS2013Poll System: Poll system for educational purposePort Layer: ????????,???????R.M.B: JUST XIXIRecetas_LosBorbotones: Recetas Los BorbotonesScrolltastic-Skin: Imagine A Free DotNetNuke responsive Skin, Which Will Make Your Wishes Come True. Sharepoint and web service: This is Sharepoint 2010 project, after deploy it, you can use javascript (ajax) to access sharepoint list, call a custom web service. Everything are very easy!SourceSareProject: 00TestWebApi: WebApi serviceTFSBuildMonitor: Application that in its current shape let you monitor build queues and finished builds in an easier way than what is possible throug the built in Build ExplorerTigerProjects: This project is to create a online RSS reader similar as Google ReaderVottReader: It is project simple reader and viewer for http://vott.ru site.WinFormStudy: C#??

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

- Putting your database in to a source control system, and,
- Running a continuous integration process, each time changes are checked in.

However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

- Putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?),
- Deploying the database changes with a managed, automated approach,
- Monitoring what you've just put live, to make sure you haven't broken anything.

This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item - monitoring changes on the live system.

In the previous post, I used a mixture of Red Gate products and other 3rd party software - GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully - so I didn't have any choice!

Background on Continuous Delivery

For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

- Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
- Versioning Databases - Branching and Merging, by Scott Allen

In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and is what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

Integration Testing

Back to something practical. The next stage, building on our set up from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting this up.

Getting tSQLt set up via SQL Test

1. Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
2. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
3. Open SSMS. You should now see SQL Test under the Tools menu; clicking this link will give you the basic SQL Test dialogue.
4. Though we've now installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.
5. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example.
6. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database.

We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

Creating and Testing a Unit Test

There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse - our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

Adding Reference Data to our Database

To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
First of all, I need to add some data to my solitary table - this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database, and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a primary key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table.

Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link the table's data to source control. In the next screen, link the data in the Email table by selecting it from the list and clicking "Save and close". We should at this point re-commit the changes (both the addition of the primary key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things - it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

An interesting point to note here: when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control…"), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

Creating and Running the Test

As I mention, the test I'm going to use here is a very simple one - are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) - but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

In SSMS, select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @Output VARCHAR(MAX);
        SET @Output = '';

        SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
        FROM dbo.Email
        WHERE Email NOT LIKE '%_@__%.__%';

        IF @Output > ''
        BEGIN
            SET @Output = CHAR(13) + CHAR(10) + @Output;
            EXEC tSQLt.Fail @Output;
        END
    END;

Once this script is entered, hit Execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)

The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows - sadly - a successful build!

To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number) - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

    21-Jun-2013 11:35:19    Build FAILED.
    21-Jun-2013 11:35:19
    21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
    21-Jun-2013 11:35:19    (sqlCI target) ->
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build - and it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment - we've tested our database changes; how can we now deploy these to target sites?
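One aside on the email check itself: the T-SQL pattern '%_@__%.__%' is a deliberately crude shape test, and the same rule is easy to apply client-side if you want to sanity-check reference data before it ever reaches the database. The following C# snippet is my own hedged translation of that LIKE pattern (an illustration, not part of the pipeline described above):

    using System.Text.RegularExpressions;

    static class EmailFormatCheck
    {
        // Rough regex equivalent of the T-SQL check: email NOT LIKE '%_@__%.__%'
        // i.e. at least one character, an '@', at least two characters,
        // a literal dot, then at least two more characters.
        static readonly Regex BasicEmailShape =
            new Regex(@"^.+@.{2,}\..{2,}$", RegexOptions.Compiled);

        public static bool LooksLikeEmail(string email)
        {
            return !string.IsNullOrEmpty(email) && BasicEmailShape.IsMatch(email);
        }
    }

For example, LooksLikeEmail("ringo.starr@beatles") returns false, which matches the failing test case shown in the build log above.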

    Read the article

  • MonologFX: FLOSS JavaFX Dialogs for the Taking

    - by HecklerMark
    Some time back, I was searching for basic dialog functionality within JavaFX and came up empty. After finding a decent open-source offering on GitHub that almost fit the bill, I began using it...and immediately began thinking of ways to "do it differently."  :-)  Having a weekend to kill, I ended up creating DialogFX and releasing it on GitHub (hecklerm/DialogFX) for anyone who might find it useful. Shortly thereafter, it was incorporated into JFXtras (jfxtras.org) as well. Today I'm sharing a different, more flexible and capable JavaFX dialog called MonologFX that I've been developing and refining over the past few months. The summary of its progression thus far is pretty well captured in the README.md file I posted with the project on GitHub:

After creating the DialogFX library for JavaFX, I received several suggestions and requests for additional or different functionality, some of which ran counter to the interfaces and/or intent of the DialogFX "way of doing things". Great ideas, but not completely compatible with the existing functionality. Wanting to incorporate these capabilities, I started over...incorporating some parts of DialogFX into the new MonologFX, as I called it, but taking it in a different direction when it seemed sensible to do so. In the meantime, the OpenJFX team has released dialog code that will be refined and eventually incorporated into JavaFX and OpenJFX. Rather than just scrap the MonologFX code or hoard it, I'm releasing it here on GitHub with the hope that someone may find it useful, interesting, or entertaining.

Things of Note

So, what are some features of MonologFX?

- Four kinds of dialog boxes: ACCEPT (check mark icon), ERROR (red 'x'), INFO (blue "i"), and QUESTION (blue question mark)
- Button alignment configurable by developer: LEFT, RIGHT, or CENTER
- Skins/stylesheets support
- Shortcut key/mnemonics support (Alt-<key>)
- Ability to designate default (RETURN-key) and cancel (ESCAPE-key) buttons
- Built-in button types and labels for OK, CANCEL, ABORT, RETRY, IGNORE, YES, and NO
- Custom button types: CUSTOM1, CUSTOM2, CUSTOM3
- Internationalization (i18n) built in. Currently, files are provided for English/US and Spanish/Spain locales; please share others and I'll add them!
- Icon support for your buttons, with or without text labels
- Fully Free/Libre Open Source Software (FLOSS), with latest source code & .jar always available at GitHub

Quick Usage Overview

Having an intense distaste for rough edges and gears flying when things break (!), I've tried to provide defaults for everything and "fail-safes" to avoid messy outcomes if some property isn't specified, etc. This also feeds the goal of making MonologFX as easy to use as possible, while retaining the library's full flexibility. Or at least that's the plan.  :-)  You can hand-assemble your buttons and dialogs, but I've also included Builder classes to help move that along as well.
Here are a couple of examples:

    MonologFXButton mlb = MonologFXButtonBuilder.create()
            .defaultButton(true)
            .icon(new ImageView(new Image(getClass().getResourceAsStream("dialog_apply.png"))))
            .type(MonologFXButton.Type.OK)
            .build();

    MonologFXButton mlb2 = MonologFXButtonBuilder.create()
            .cancelButton(true)
            .icon(new ImageView(new Image(getClass().getResourceAsStream("dialog_cancel.png"))))
            .type(MonologFXButton.Type.CANCEL)
            .build();

    MonologFX mono = MonologFXBuilder.create()
            .modal(true)
            .message("Welcome to MonologFX! Please feel free to try it out and share your thoughts.")
            .titleText("Important Announcement")
            .button(mlb)
            .button(mlb2)
            .buttonAlignment(MonologFX.ButtonAlignment.CENTER)
            .build();

    MonologFXButton.Type retval = mono.showDialog();

And a second example, this time building a QUESTION dialog with YES/NO buttons:

    MonologFXButton mlb = MonologFXButtonBuilder.create()
            .defaultButton(true)
            .icon(new ImageView(new Image(getClass().getResourceAsStream("dialog_apply.png"))))
            .type(MonologFXButton.Type.YES)
            .build();

    MonologFXButton mlb2 = MonologFXButtonBuilder.create()
            .cancelButton(true)
            .icon(new ImageView(new Image(getClass().getResourceAsStream("dialog_cancel.png"))))
            .type(MonologFXButton.Type.NO)
            .build();

    MonologFX mono = MonologFXBuilder.create()
            .modal(true)
            .type(MonologFX.Type.QUESTION)
            .message("Welcome to MonologFX! Does this look like it might be useful?")
            .titleText("Important Announcement")
            .button(mlb)
            .button(mlb2)
            .buttonAlignment(MonologFX.ButtonAlignment.RIGHT)
            .build();

Extra Credit

Thanks to everyone who offered ideas for improvement and/or extension of the functionality contained within DialogFX. The JFXtras team welcomed it into the fold, and while I doubt there will be a need to include MonologFX in JFXtras, team members Gerrit Grunwald & Jose Peredas Llamas volunteered templates and i18n expertise to make MonologFX what it is. Thanks for the push, guys!

Where to Get (Git!) It

If you'd like to check it out, point your browser to the MonologFX repository on GitHub. Full source code is there, along with the current .jar file. Please give it a try and share your thoughts! I'd love to hear from you.

All the best,
Mark

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part I)

    I've spent the better part of the last two years doing nothing but K2 workflow development, which until very recently could only be done in Visual Studio 2005, so I am a bit behind the times. I seem to have skipped over using Visual Studio 2008 entirely, and I am now ready to stumble through all that I've missed. Not that I will abandon my K2 ramblings, but I need to get back to some of the other technologies I am passionate about but haven't had the option of working with on a day-to-day basis as I have with K2 blackpearl. Specifically, I am going to be focusing my efforts on what is new in the Entity Framework and WPF in Visual Studio 2010, though you have to keep in mind that since I have skipped VS 2008, I may be giving VS 2010 credit for things that really have been around for a while (hey, if I haven't seen it, it is new to me!). I have the following simple goals in mind for this exercise:

- Entity Framework - Model an inherited class
- Entity Framework - Model a lookup entity
- WPF - Bind a list of entities
- WPF - On selection of an entity in the bound list, display values of the selected entity
- WPF - For the lookup field, provide a dropdown of potential values to look up

All of these goals must be accomplished using as little code as possible, relying on the features we get out of the box in Visual Studio 2010. This isn't going to be rocket science here; I'm not even looking to get or save this data from/to a data source, but I gotta start somewhere and hopefully it will grow into something more interesting.

For this exercise, I am going to try to model some fictional data about football players and personnel (maybe turning this into some sort of NFL simulation game if I lose my job and can play with this all day), so I'll start with a Person class that has a name property, and extend that with a Player class to include a Position lookup property. The idea is that a Person can be a Player, Coach or whatever other personnel type may be associated with a football team, but we'll only flesh out the Player aspect of a person for this.

So to get started, I fired up Visual Studio 2010 and created a new WPF Application. To this project, I added a new ADO.NET Entity Data Model named PlayerModel (for now - not sure what will be an appropriate name, so this may be revisited). I chose for it to be an empty model, as I don't have a database designed for this yet. Using the toolbox, I dragged out an entity for each of the items we identified earlier - Person, Player and Position - and gave them some simple properties (note that I kept the default Id property for each of them).

Now to figure out how to link these things together the way I want. First, let's try to tell it that Player extends Person. I see that Inheritance is one of the items in the toolbox, but I can't seem to drag it out anywhere onto the canvas. However, when I right-click an element, I get the option to Add Inheritance to it, which gives us exactly what we want. Ok, now that we have that, how do we tell it that each player has a position? Well, despite Association being in the toolbox, I have learned that you can't just drag and drop those elements, so I right-click Player and select Add -> Association to get the association dialog. I see the option here to add foreign key properties to my entities. I've read somewhere that this is a new and highly sought-after feature, so I'll see what it does.
    Selecting it includes a PositionId on the Player element for me, which seems pretty database-centric, and I would like to see if I can live without it for now given that we also got the Position property out of this association. I'll bring it back into the fold if it ends up being useful later.

    Trying to compile this resulted in an error stating that the Player entity cannot have an Id, because the Person element it extends already has a property named Id. Makes sense, so I remove it and compile again. Success, but with a warning; still, success is a good thing, so I'll pretend I didn't see that warning for now. It probably has to do with the fact that my Player entity is now pretty useless as it doesn't have any non-navigation properties.

    So things seem to match what we are going for. Great, now what the heck do we do with this? Let's switch gears and see what we get for free dealing with this model from the UI. Let's open up the MainWindow.xaml and see if we can connect to our entities as a data source. Hey, what's this? Have you read my mind, Visual Studio? Our entities are already listed in the Data Sources panel. I do notice, however, that our Player entity is missing. Is this due to that compilation warning? I'll add a bogus property to our Player entity just to see if that is the case... no, still no love. The warning reads: Error 2062: No mapping specified for instances of the EntitySet and AssociationSet in the EntityContainer PlayerModelContainer. Well, if everything worked without any issues, then I wouldn't be stumbling through at all, so let's get to the bottom of this. My good friend google indicates that the warning is due to the model not being tied up to a database. Hmmm, so why don't Players show up in my data sources? A little bit of drill-down shows that they are, in fact, exposed under Positions. Well now, that isn't quite what I want. While you could get to players through a position, it shouldn't be that way exclusively. Oh well, I can ignore that for now; let's drag Players out onto the canvas after selecting List from the dropdown.

    Hey, what the heck? I wanted a list, not a listview. Get rid of that list view that was just dropped, drop in a listbox and then drop the Players entity into it. That will bind it for us. Of course, there isn't any data to show, which brings us to the really hacky part of all this, and that is to stuff some test data into our view source without actually getting it from any data source. To do this through code, we need to grab a reference to the positionsPlayersViewSource resource that was created for us when we dragged out our Players entity. We then set the source of that reference equal to a populated list of Players. We'll add a couple of players that way, as well as a few positions via the positionsViewSource resource, and I'll ensure that each player has a position specified.
    Ultimately, the code looks like this:

        System.Windows.Data.CollectionViewSource positionViewSource =
            ((System.Windows.Data.CollectionViewSource)(this.FindResource("positionsViewSource")));
        List<Position> positions = new List<Position>();
        Position newPosition = new Position();
        newPosition.Id = 0;
        newPosition.Name = "WR";
        positions.Add(newPosition);
        newPosition = new Position();
        newPosition.Id = 1;
        newPosition.Name = "RB";
        positions.Add(newPosition);
        newPosition = new Position();
        newPosition.Id = 2;
        newPosition.Name = "QB";
        positions.Add(newPosition);
        positionViewSource.Source = positions;

        System.Windows.Data.CollectionViewSource playerViewSource =
            ((System.Windows.Data.CollectionViewSource)(this.FindResource("positionsPlayersViewSource")));
        List<Player> players = new List<Player>();
        Player newPlayer = new Player();
        newPlayer.Id = 0;
        newPlayer.Name = "Test Dude";
        newPlayer.Position = positions[0];
        players.Add(newPlayer);
        newPlayer = new Player();
        newPlayer.Id = 1;
        newPlayer.Name = "Test Dude II";
        newPlayer.Position = positions[1];
        players.Add(newPlayer);
        newPlayer = new Player();
        newPlayer.Id = 2;
        newPlayer.Name = "Test Dude III";
        newPlayer.Position = positions[2];
        players.Add(newPlayer);
        playerViewSource.Source = players;

    Now that our views are being loaded with data, we can go about tying things together visually. Drop a text box (to show the selected player's name) and a combo box (to show the selected player's position). Drag the Positions entity from the data sources panel to the combo box to wire it up to the positions view source. Click the text box that was dragged, and find its Text property in the properties pane. There is a little glyph next to it that displays Advanced Properties when hovered over; click this and then select Apply Data Binding. In the dialog that appears, we can select the current player's name as the value to bind to. Similarly, we can wire up the combo box's SelectedItem value to the current player's position.

    When the application is executed and we navigate through the various players, we automatically get their name and position bound to the appropriate fields. All of this was accomplished with no code save for loading the test data, and I might add, it was pretty intuitive to do so via the drag and drop of entities straight from the data sources panel. So maybe all of this was old hat to you, but I was very impressed with this experience and I look forward to stumbling through the caveats of doing more complex data modeling and binding in this fashion. Next up, I suppose, will be figuring out how to get the entities to get real data from a data source instead of stuffing it with test data, as well as trying to figure out why Players ended up being under Positions in the data sources panel.
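    For anyone who would rather see the result in markup than in the designer, here is a minimal sketch of the kind of XAML those drag-and-drop steps end up producing. It assumes the view source names generated above (positionsPlayersViewSource and positionsViewSource); the exact attributes and layout the designer emits will differ a bit.

        <!-- Hedged sketch: bindings against the generated view sources; layout is illustrative only. -->
        <Grid DataContext="{StaticResource positionsPlayersViewSource}">
            <!-- The list drives "currency": the selected item becomes the current player. -->
            <ListBox ItemsSource="{Binding}"
                     DisplayMemberPath="Name"
                     IsSynchronizedWithCurrentItem="True" />
            <!-- Shows the Name of the currently selected player. -->
            <TextBox Text="{Binding Name}" />
            <!-- All positions as the dropdown, selection bound to the current player's Position. -->
            <ComboBox ItemsSource="{Binding Source={StaticResource positionsViewSource}}"
                      DisplayMemberPath="Name"
                      SelectedItem="{Binding Position}" />
        </Grid>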

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

    A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key to identify the parent Order. An Order can have 1 to n Items.

    I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.

    The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query to join the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.)

    The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and child Items objects from the 2 datatables returned.

    What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created on the WinForm fat-client app. The SQL built on the client would have looked something like this:

        TempSQL = "
            INSERT INTO #ItemsToQuery (OrderId, ItemId)
            SELECT Orders.OrderId, Items.ItemId
            FROM Orders, Items
            WHERE Orders.OrderId = Items.OrderId
              AND /* some unpredictable Order filters go here */
              AND /* some unpredictable Items filters go here */
        "

    Then, I would call a stored procedure:

        CREATE PROCEDURE GetItemsAndOrders (@tempSql AS text)
        AS
            EXECUTE (@tempSql)  -- to create the #ItemsToQuery table

            SELECT * FROM Items
            WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)

            SELECT * FROM Orders
            WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs, and if I change the static SQLs to dynamic, no results are passed back to the fat client.

    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using a static SQL that, because it was created by static SQL, could then be queried without issue. (I could also investigate passing the new table-type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

        SELECT Items.*
        FROM Orders, Items
        WHERE Orders.OrderId = Items.OrderId
          AND (dynamic filter)

        SELECT Orders.*
        FROM Orders, Items
        WHERE Orders.OrderId = Items.OrderId
          AND (dynamic filter)

    This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables.

    3) I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one.

    If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
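    One more option worth sketching, since the scope problem comes from the temp table and the SELECTs living in different batches: build the temp-table fill and both SELECTs into a single dynamic batch. Result sets produced inside sp_executesql are returned to the calling client, so nothing here needs to be "fixed" SQL. This is a hedged sketch against the hypothetical table names above (it needs SQL Server 2008 or later for the inline DECLARE initializer), and the usual injection caveats apply to the concatenated filter string:

        CREATE PROCEDURE GetItemsAndOrders (@filterSql nvarchar(max))
        AS
        BEGIN
            -- Everything shares one scope, so the #temp table is visible to both SELECTs.
            DECLARE @batch nvarchar(max) = N'
                CREATE TABLE #ItemsToQuery (OrderId int, ItemId int);
                INSERT INTO #ItemsToQuery (OrderId, ItemId)
                ' + @filterSql + N';
                SELECT i.* FROM Items i
                 WHERE i.ItemId IN (SELECT ItemId FROM #ItemsToQuery);
                SELECT o.* FROM Orders o
                 WHERE o.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery);';

            EXECUTE sp_executesql @batch;  -- both result sets flow back to the client
        END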

    Read the article

  • Dynamic object property populator (without reflection)

    - by grenade
    I want to populate an object's properties without using reflection, in a manner similar to the DynamicBuilder on CodeProject. The CodeProject example is tailored for populating entities using a DataReader or DataRecord. I use this in several DALs to good effect. Now I want to modify it to use a dictionary or other data-agnostic object so that I can use it in non-DAL code, in places I currently use reflection. I know almost nothing about OpCodes and IL. I just know that it works well and is faster than reflection.

    I have tried to modify the CodeProject example and, because of my ignorance with IL, I have gotten stuck on two lines. One of them deals with DBNulls and I'm pretty sure I can just lose it, but I don't know if the lines preceding and following it are related and which of them will also need to go. The other, I think, is the one that pulled the value out of the datarecord before and now needs to pull it out of the dictionary. I think I can replace the "getValueMethod" with my "property.Value", but I'm not sure. I'm open to alternative/better ways of skinning this cat too. Here's the code so far (the commented out lines are the ones I'm stuck on):

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Reflection.Emit;

        public class Populator<T>
        {
            private delegate T Load(Dictionary<string, object> properties);

            private Load _handler;

            private Populator() { }

            public T Build(Dictionary<string, object> properties)
            {
                return _handler(properties);
            }

            public static Populator<T> CreateBuilder(Dictionary<string, object> properties)
            {
                //private static readonly MethodInfo getValueMethod = typeof(IDataRecord).GetMethod("get_Item", new [] { typeof(int) });
                //private static readonly MethodInfo isDBNullMethod = typeof(IDataRecord).GetMethod("IsDBNull", new [] { typeof(int) });
                Populator<T> dynamicBuilder = new Populator<T>();
                DynamicMethod method = new DynamicMethod("Create", typeof(T),
                    new[] { typeof(Dictionary<string, object>) }, typeof(T), true);
                ILGenerator generator = method.GetILGenerator();
                LocalBuilder result = generator.DeclareLocal(typeof(T));
                generator.Emit(OpCodes.Newobj, typeof(T).GetConstructor(Type.EmptyTypes));
                generator.Emit(OpCodes.Stloc, result);
                int i = 0;
                foreach (var property in properties)
                {
                    PropertyInfo propertyInfo = typeof(T).GetProperty(property.Key,
                        BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase |
                        BindingFlags.FlattenHierarchy | BindingFlags.Default);
                    Label endIfLabel = generator.DefineLabel();
                    if (propertyInfo != null && propertyInfo.GetSetMethod() != null)
                    {
                        generator.Emit(OpCodes.Ldarg_0);
                        generator.Emit(OpCodes.Ldc_I4, i);
                        //generator.Emit(OpCodes.Callvirt, isDBNullMethod);
                        generator.Emit(OpCodes.Brtrue, endIfLabel);
                        generator.Emit(OpCodes.Ldloc, result);
                        generator.Emit(OpCodes.Ldarg_0);
                        generator.Emit(OpCodes.Ldc_I4, i);
                        //generator.Emit(OpCodes.Callvirt, getValueMethod);
                        generator.Emit(OpCodes.Unbox_Any, property.Value.GetType());
                        generator.Emit(OpCodes.Callvirt, propertyInfo.GetSetMethod());
                        generator.MarkLabel(endIfLabel);
                    }
                    i++;
                }
                generator.Emit(OpCodes.Ldloc, result);
                generator.Emit(OpCodes.Ret);
                dynamicBuilder._handler = (Load)method.CreateDelegate(typeof(Load));
                return dynamicBuilder;
            }
        }

    EDIT: Using Marc Gravell's PropertyDescriptor implementation (with HyperDescriptor) the code is simplified a hundred-fold. I now have the following test:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using Hyper.ComponentModel;

        namespace Test
        {
            class Person
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }

            class Program
            {
                static void Main()
                {
                    HyperTypeDescriptionProvider.Add(typeof(Person));
                    var properties = new Dictionary<string, object>
                    {
                        { "Id", 10 },
                        { "Name", "Fred Flintstone" }
                    };
                    Person person = new Person();
                    DynamicUpdate(person, properties);
                    Console.WriteLine("Id: {0}; Name: {1}", person.Id, person.Name);
                    Console.ReadKey();
                }

                public static void DynamicUpdate<T>(T entity, Dictionary<string, object> properties)
                {
                    foreach (PropertyDescriptor propertyDescriptor in TypeDescriptor.GetProperties(typeof(T)))
                        if (properties.ContainsKey(propertyDescriptor.Name))
                            propertyDescriptor.SetValue(entity, properties[propertyDescriptor.Name]);
                }
            }
        }

    Any comments on performance considerations for both TypeDescriptor.GetProperties() & PropertyDescriptor.SetValue() are welcome...
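    On that performance question: TypeDescriptor.GetProperties() walks the provider chain on every call, so the usual advice is to fetch the collection once per type and reuse it; with HyperDescriptor registered, SetValue is then the cheap, IL-backed part. A hedged sketch of that caching (PropertyCache<T> and its members are hypothetical names, not part of HyperDescriptor):

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;

        // Hedged sketch: pay the TypeDescriptor.GetProperties lookup once per
        // closed generic type instead of once per entity populated.
        public static class PropertyCache<T>
        {
            private static readonly Dictionary<string, PropertyDescriptor> Props = BuildCache();

            private static Dictionary<string, PropertyDescriptor> BuildCache()
            {
                var map = new Dictionary<string, PropertyDescriptor>(StringComparer.OrdinalIgnoreCase);
                foreach (PropertyDescriptor pd in TypeDescriptor.GetProperties(typeof(T)))
                    map[pd.Name] = pd;
                return map;
            }

            public static void Populate(T entity, Dictionary<string, object> values)
            {
                foreach (var pair in values)
                {
                    PropertyDescriptor pd;
                    if (Props.TryGetValue(pair.Key, out pd) && !pd.IsReadOnly)
                        pd.SetValue(entity, pair.Value);  // fast path when HyperDescriptor is registered
                }
            }
        }

    Usage would then be PropertyCache<Person>.Populate(person, properties); in place of the DynamicUpdate call above.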

    Read the article

  • nivo slider and drop down menu doesn't work in IE

    - by venom
    Does anyone have any idea why the drop down menu in IE disappears under the nivo slider? I tried to play with z-index, but it didn't help. I also know that drop down menus disappear under flash content, but this is not the case here (wmode=transparent); as far as I know the nivo slider uses just jQuery, no flash. Here is the html:

        <table>
          <tr height="50">
            <td colspan="2" align="right" class="bottom_menu">
              <ul id="nav" class="dropdown dropdown-horizontal">
                <li><a href="/index.cfm?fuseaction=home.logout" class="dir" style="border:0 !important;">Çikis</a></li>
                <li><a href="/index.cfm?fuseaction=objects2.list_basket" class="dir">Sepetim</a></li>
                <li><a href="/index.cfm?fuseaction=objects2.me" class="dir">Sirketim</a>
                  <ul>
                    <li><a href="/index.cfm?fuseaction=objects2.list_opportunities">Firsatlar</a></li>
                    <li><a href="/index.cfm?fuseaction=objects2.form_add_partner">Sirkete Kullanici Ekle</a></li>
                    <li><a href="/index.cfm?fuseaction=objects2.form_upd_my_company">Kullanici Yönetimi</a></li>
                    <li><a href="/index.cfm?fuseaction=objects2.list_analyses">Analizler</a></li>
                    <li><a href="/index.cfm?fuseaction=objects2.list_extre">Hesap Ekstresi</a></li>
                    <li><a href="/index.cfm?fuseaction=objects2.popup_add_online_pos" target="_blank">Sanal Pos</a></li>
                  </ul>
                </li>
              </ul>
            </td>
          </tr>
        </table>

        <div id="banner">
          <img src="/documents/templates/projedepo/l_top.gif" style="z-index:1;position:absolute; left:0; top:0;" width="24px" height="24px" border="0" />
          <img src="/documents/templates/projedepo/r_top.gif" style="z-index:1;position:absolute; right:0; top:0;" width="24px" height="24px" border="0" />
          <img src="/documents/templates/projedepo/l_bottom.gif" style="z-index:1;position:absolute; left:0; bottom:0;" width="24px" height="24px" border="0" />
          <img src="/documents/templates/projedepo/r_bottom.gif" style="z-index:1;position:absolute; right:0; bottom:0;" width="24px" height="24px" border="0" />
          <div class="banner_img">
            <link rel="stylesheet" href="/documents/templates/projedepo/banner/nivo-slider.css" type="text/css" media="screen" />
            <link rel="stylesheet" href="/documents/templates/projedepo/banner/style.css" type="text/css" media="screen" />
            <div id="slider" class="nivoSlider">
              <img title="#1" src="/documents/templates/projedepo/banner/canon.jpg" alt="" />
              <img title="#2" src="/documents/templates/projedepo/banner/indigovision.jpg" alt="" />
            </div>
            <div id="1" class="nivo-html-caption">
              <a href="/index.cfm?fuseaction=objects2.detail_product&product_id=612&stock_id=612"><img src="/documents/templates/projedepo/banner/daha_fazlasi.jpg" border="0" /></a>
            </div>
            <div id="2" class="nivo-html-caption">
              <a href="/index.cfm?fuseaction=objects2.detail_product&product_id=630&stock_id=630"><img src="/documents/templates/projedepo/banner/daha_fazlasi.jpg" border="0" /></a>
            </div>
            <script type="text/javascript" src="/JS/jquery.nivo.slider.pack.js"></script>
            <script type="text/javascript">
              $(window).load(function() {
                $('#slider').nivoSlider({
                  effect: 'random',                       // Specify sets like: 'fold,fade,sliceDown'
                  slices: 15,
                  animSpeed: 1000,                        // Slide transition speed
                  pauseTime: 10000,
                  startSlide: 0,                          // Set starting Slide (0 index)
                  directionNav: true,                     // Next & Prev
                  directionNavHide: true,                 // Only show on hover
                  controlNav: true,                       // 1,2,3...
                  controlNavThumbs: false,                // Use thumbnails for Control Nav
                  controlNavThumbsFromRel: false,         // Use image rel for thumbs
                  controlNavThumbsSearch: '.jpg',         // Replace this with...
                  controlNavThumbsReplace: '_thumb.jpg',  // ...this in thumb Image src
                  keyboardNav: true,                      // Use left & right arrows
                  pauseOnHover: true,                     // Stop animation while hovering
                  manualAdvance: false,                   // Force manual transitions
                  captionOpacity: 1.0,                    // Universal caption opacity
                  beforeChange: function(){},
                  afterChange: function(){},
                  slideshowEnd: function(){},             // Triggers after all slides have been shown
                  lastSlide: function(){},                // Triggers when last slide is shown
                  afterLoad: function(){}                 // Triggers when slider has loaded
                });
              });
            </script>
          </div>
        </div>

    Here is the css for the dropdown menu:

        http://www.micae.com/documents/templates/projedepo/default.css
        http://www.micae.com/documents/templates/projedepo/default.advanced.css
        http://www.micae.com/documents/templates/projedepo/dropdown.css

    and for the nivo slider:

        http://www.micae.com/documents/templates/projedepo/banner/style.css
        http://www.micae.com/documents/templates/projedepo/banner/nivo-slider.css

    and for the banner divs:

        #banner {
            position: relative;
            width: 980px;
            height: 435px;
            background: #fff;
            margin-bottom: 20px;
            margin-top: -1px;
            color: #000;
            z-index: 60;
        }
        .banner_img {
            padding: 8px;
            position: absolute;
            z-index: 2;
        }

    and the javascript (jquery plus the nivo slider by default):

        http://www.micae.com/JS/jquery.nivo.slider.pack.js
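    For what it's worth, the usual culprit with IE7 and earlier is that any positioned ancestor caps the effective z-index of everything inside it, so the menu's z-index is compared within its own stacking context rather than against the slider's. A hedged sketch of the kind of fix that often works, using the selectors from the markup above (the exact values are assumptions to test, not a verified solution):

        /* Lift the menu into its own stacking context above #banner (z-index: 60). */
        #nav {
            position: relative;
            z-index: 100;
        }

        /* In old IE the positioned ancestor wins, so the menu's containing
           cell may also need to be raised above the slider. */
        .bottom_menu {
            position: relative;
            z-index: 100;
        }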

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth?

    For folks who don't have the time to read the details below, I am NOT looking for any of these answers:

      • Secure the router and only let a few trusted people use it
      • Tell everyone to turn off unused services & generally police themselves
      • Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi. Something that can be handled with a single home or small office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people.

    I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best case scenario. Often it is considerably slower.

    The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on. This is so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs.

    Problems

    In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas:

    Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them.

    Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing. Examples:

      • Peer-to-peer downloading programs such as BitTorrent that run in the background
      • Automatic software update services. These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background.
      • Security software that downloads new signatures such as anti-virus, anti-malware, etc.
      • Backup software and other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of the Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold.

    Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location happens to be particularly excellent. In a recent test I got 10 megabits down at the meeting site.

    The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got

    First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2 day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI (screenshot of the category checkboxes omitted).

    I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background.
    Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

      • Microsoft automatic updates
      • Apple automatic updates
      • Adobe automatic updates
      • Google automatic updates
      • Other major software update services
      • Major virus/malware/security signature updates
      • Major background backup services
      • Other services that run in the background and can eat lots of bandwidth

    I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.
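    As a concrete starting point, the port-level half of this can be expressed as a whitelist on any Linux-based gateway. The Cradlepoint's own filter UI is different, so treat this as a hedged sketch of the policy rather than literal config; the interface name and the exact port list are assumptions for "web, email and chat only":

        # Force all DNS through the router (and thus OpenDNS) by refusing to forward it.
        IF=wlan0                                  # assumed meeting-room WiFi interface
        iptables -A FORWARD -i $IF -p udp --dport 53 -j DROP
        iptables -A FORWARD -i $IF -p tcp --dport 53 -j DROP

        # Allow web, mail submission/IMAP/POP over SSL, and XMPP chat.
        iptables -A FORWARD -i $IF -p tcp -m multiport \
                 --dports 80,443,465,587,993,995,5222 -j ACCEPT

        # Everything else (BitTorrent, odd update protocols, etc.) gets dropped.
        iptables -A FORWARD -i $IF -j DROP

    The catch is that most of the update services listed above ride plain HTTP/HTTPS, so ports alone won't stop them; those need the DNS-name side of the filter layered on top of the OpenDNS categories (commonly cited hosts include *.windowsupdate.com and update.microsoft.com for Microsoft, and swscan.apple.com / swcdn.apple.com for Apple, though any such list goes stale and should be verified).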

    Read the article

  • org-sort multi: date/time (?d ?t) | priority (?p) | title (?a)

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODOs so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by title of the task?

    EDIT: I believe that org-sort by deadline (?d) has a bug that cannot properly handle undated tasks. I am working on a workaround (i.e., moving the undated todo to a different heading before the deadline (?d) sort occurs), but perhaps the best thing to do would be to try and fix the original sorting function. Development of the workaround can be found in this thread (i.e., moving the undated tasks to a different heading in one fell swoop): How to automate org-refile for multiple todo

    EDIT: Apparently, the following code (ancient history) that I found on the internet was eventually modified and included as a part of org-sort-entries. Unfortunately, undated todo are not properly sorted when sorting by deadline -- i.e., they are mixed in with the dated todo.

        ;; multiple sort
        (defun org-sort-multi (&rest sort-types)
          "Multiple sorts on a certain level of an outline tree, or plain list items.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.
        Example: To sort first by TODO status, then by priority, then by date, then
        alphabetically (case-sensitive) use the following call:
        (org-sort-multi '(?d ?p ?t (t . ?a)))"
          (interactive)
          (dolist (x (nreverse sort-types))
            (when (char-valid-p x) (setq x (cons nil x)))
            (condition-case nil
                (org-sort-entries (car x) (cdr x))
              (error nil))))

        ;; sort current level
        (defun lawlist-sort (&rest sort-types)
          "Sort the current org level.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.
        Defaults to \"?o ?p\" which is sorted by TODO status, then by priority"
          (interactive)
          (when (equal mode-name "Org")
            (let ((sort-types (or sort-types
                                  (if (or (org-entry-get nil "TODO")
                                          (org-entry-get nil "PRIORITY"))
                                      '(?d ?t ?p) ;; date, time, priority
                                    '((nil . ?a))))))
              (save-excursion
                (outline-up-heading 1)
                (let ((start (point)) end)
                  (while (and (not (bobp)) (not (eobp)) (<= (point) start))
                    (condition-case nil
                        (outline-forward-same-level 1)
                      (error (outline-up-heading 1))))
                  (unless (> (point) start) (goto-char (point-max)))
                  (setq end (point))
                  (goto-char start)
                  (apply 'org-sort-multi sort-types)
                  (goto-char end)
                  (when (eobp) (forward-line -1))
                  (when (looking-at "^\\s-*$")
                    ;; (delete-line)
                    )
                  (goto-char start)
                  ;; (dotimes (x ) (org-cycle))
                  )))))

    EDIT: Here is a more modern version of multi-sort, which is likely based upon further development of the above code:

        (defun org-sort-all ()
          (interactive)
          (save-excursion
            (goto-char (point-min))
            (while (re-search-forward "^\* " nil t)
              (goto-char (match-beginning 0))
              (condition-case err
                  (progn
                    (org-sort-entries t ?a)
                    (org-sort-entries t ?p)
                    (org-sort-entries t ?o)
                    (forward-line))
                (error nil)))
            (goto-char (point-min))
            (while (re-search-forward "\* PROJECT " nil t)
              (goto-char (line-beginning-position))
              (ignore-errors
                (org-sort-entries t ?a)
                (org-sort-entries t ?p)
                (org-sort-entries t ?o))
              (forward-line))))

    EDIT: The best option will be to fix sorting of deadlines (?d) so that undated todo are moved to the bottom of the outline, instead of mixed in with the dated todo. Here is an excerpt from the current org.el included within Emacs Trunk (as of July 1, 2013):

        (defun org-sort (with-case)
          "Call `org-sort-entries', `org-table-sort-lines' or `org-sort-list'.
        Optional argument WITH-CASE means sort case-sensitively."
          (interactive "P")
          (cond
           ((org-at-table-p) (org-call-with-arg 'org-table-sort-lines with-case))
           ((org-at-item-p) (org-call-with-arg 'org-sort-list with-case))
           (t (org-call-with-arg 'org-sort-entries with-case))))

        (defun org-sort-remove-invisible (s)
          (remove-text-properties 0 (length s) org-rm-props s)
          (while (string-match org-bracket-link-regexp s)
            (setq s (replace-match (if (match-end 2) (match-string 3 s) (match-string 1 s)) t t s)))
          s)

        (defvar org-priority-regexp) ; defined later in the file

        (defvar org-after-sorting-entries-or-items-hook nil
          "Hook that is run after a bunch of entries or items have been sorted.
        When children are sorted, the cursor is in the parent line when this
        hook gets called.  When a region or a plain list is sorted, the cursor
        will be in the first entry of the sorted region/list.")

        (defun org-sort-entries (&optional with-case sorting-type getkey-func compare-func property)
          "Sort entries on a certain level of an outline tree.
        If there is an active region, the entries in the region are sorted.
        Else, if the cursor is before the first entry, sort the top-level items.
        Else, the children of the entry at point are sorted.

        Sorting can be alphabetically, numerically, by date/time as given by
        a time stamp, by a property or by priority.  The command prompts for
        the sorting type unless it has been given to the function through the
        SORTING-TYPE argument, which needs to be a character,
        \(?n ?N ?a ?A ?t ?T ?s ?S ?d ?D ?p ?P ?o ?O ?r ?R ?f ?F).
        Here is the precise meaning of each character:

        n Numerically, by converting the beginning of the entry/item to a number.
        a Alphabetically, ignoring the TODO keyword and the priority, if any.
        o By order of TODO keywords.
        t By date/time, either the first active time stamp in the entry, or, if
          none exist, by the first inactive one.
        s By the scheduled date/time.
        d By deadline date/time.
        c By creation time, which is assumed to be the first inactive time stamp
          at the beginning of a line.
        p By priority according to the cookie.
        r By the value of a property.
        Capital letters will reverse the sort order.

        If the SORTING-TYPE is ?f or ?F, then GETKEY-FUNC specifies a function to be
        called with point at the beginning of the record.  It must return either
        a string or a number that should serve as the sorting key for that record.

        Comparing entries ignores case by default.  However, with an optional argument
        WITH-CASE, the sorting considers case as well."
          (interactive "P")
          (let ((case-func (if with-case 'identity 'downcase))
                (cmstr
                 ;; The clock marker is lost when using `sort-subr', let's
                 ;; store the clocking string.
                 (when (equal (marker-buffer org-clock-marker) (current-buffer))
                   (save-excursion
                     (goto-char org-clock-marker)
                     (looking-back "^.*")
                     (match-string-no-properties 0))))
                start beg end stars re re2 txt what tmp)
            ;; Find beginning and end of region to sort
            (cond
             ((org-region-active-p)
              ;; we will sort the region
              (setq end (region-end)
                    what "region")
              (goto-char (region-beginning))
              (if (not (org-at-heading-p)) (outline-next-heading))
              (setq start (point)))
             ((or (org-at-heading-p)
                  (condition-case nil (progn (org-back-to-heading) t) (error nil)))
              ;; we will sort the children of the current headline
              (org-back-to-heading)
              (setq start (point)
                    end (progn (org-end-of-subtree t t)
                               (or (bolp) (insert "\n"))
                               (org-back-over-empty-lines)
                               (point))
                    what "children")
              (goto-char start)
              (show-subtree)
              (outline-next-heading))
             (t
              ;; we will sort the top-level entries in this file
              (goto-char (point-min))
              (or (org-at-heading-p) (outline-next-heading))
              (setq start (point))
              (goto-char (point-max))
              (beginning-of-line 1)
              (when (looking-at ".*?\\S-")
                ;; File ends in a non-white line
                (end-of-line 1)
                (insert "\n"))
              (setq end (point-max))
              (setq what "top-level")
              (goto-char start)
              (show-all)))
            (setq beg (point))
            (if (>= beg end) (error "Nothing to sort"))
            (looking-at "\\(\\*+\\)")
            (setq stars (match-string 1)
                  re (concat "^" (regexp-quote stars) " +")
                  re2 (concat "^" (regexp-quote (substring stars 0 -1)) "[ \t\n]")
                  txt (buffer-substring beg end))
            (if (not (equal (substring txt -1) "\n")) (setq txt (concat txt "\n")))
            (if (and (not (equal stars "*")) (string-match re2 txt))
                (error "Region to sort contains a level above the first entry"))
            (unless sorting-type
              (message "Sort %s: [a]lpha [n]umeric [p]riority p[r]operty todo[o]rder [f]unc [t]ime [s]cheduled [d]eadline [c]reated A/N/P/R/O/F/T/S/D/C means reversed:" what)
              (setq sorting-type (read-char-exclusive))
              (and (= (downcase sorting-type) ?f)
                   (setq getkey-func
                         (org-icompleting-read "Sort using function: "
                                               obarray 'fboundp t nil nil))
                   (setq getkey-func (intern getkey-func)))
              (and (= (downcase sorting-type) ?r)
                   (setq property
                         (org-icompleting-read "Property: "
                                               (mapcar 'list (org-buffer-property-keys t))
                                               nil t))))
            (message "Sorting entries...")
            (save-restriction
              (narrow-to-region start end)
              (let ((dcst (downcase sorting-type))
                    (case-fold-search nil)
                    (now (current-time)))
                (sort-subr
                 (/= dcst sorting-type)
                 ;; This function moves to the beginning character of the "record" to
                 ;; be sorted.
                 (lambda nil
                   (if (re-search-forward re nil t)
                       (goto-char (match-beginning 0))
                     (goto-char (point-max))))
                 ;; This function moves to the last character of the "record" being
                 ;; sorted.
                 (lambda nil
                   (save-match-data
                     (condition-case nil
                         (outline-forward-same-level 1)
                       (error (goto-char (point-max))))))
                 ;; This function returns the value that gets sorted against.
                 (lambda nil
                   (cond
                    ((= dcst ?n)
                     (if (looking-at org-complex-heading-regexp)
                         (string-to-number (match-string 4))
                       nil))
                    ((= dcst ?a)
                     (if (looking-at org-complex-heading-regexp)
                         (funcall case-func (match-string 4))
                       nil))
                    ((= dcst ?t)
                     (let ((end (save-excursion (outline-next-heading) (point))))
                       (if (or (re-search-forward org-ts-regexp end t)
                               (re-search-forward org-ts-regexp-both end t))
                           (org-time-string-to-seconds (match-string 0))
                         (org-float-time now))))
                    ((= dcst ?c)
                     (let ((end (save-excursion (outline-next-heading) (point))))
                       (if (re-search-forward
                            (concat "^[ \t]*\\[" org-ts-regexp1 "\\]") end t)
                           (org-time-string-to-seconds (match-string 0))
                         (org-float-time now))))
                    ((= dcst ?s)
                     (let ((end (save-excursion (outline-next-heading) (point))))
                       (if (re-search-forward org-scheduled-time-regexp end t)
                           (org-time-string-to-seconds (match-string 1))
                         (org-float-time now))))
                    ((= dcst ?d)
                     (let ((end (save-excursion (outline-next-heading) (point))))
                       (if (re-search-forward org-deadline-time-regexp end t)
                           (org-time-string-to-seconds (match-string 1))
                         (org-float-time now))))
                    ((= dcst ?p)
                     (if (re-search-forward org-priority-regexp (point-at-eol) t)
                         (string-to-char (match-string 2))
                       org-default-priority))
                    ((= dcst ?r)
                     (or (org-entry-get nil property) ""))
                    ((= dcst ?o)
                     (if (looking-at org-complex-heading-regexp)
                         (- 9999 (length (member (match-string 2) org-todo-keywords-1)))))
                    ((= dcst ?f)
                     (if getkey-func
                         (progn
                           (setq tmp (funcall getkey-func))
                           (if (stringp tmp) (setq tmp (funcall case-func tmp)))
                           tmp)
                       (error "Invalid key function `%s'" getkey-func)))
                    (t (error "Invalid sorting type `%c'" sorting-type))))
                 nil
                 (cond
                  ((= dcst ?a) 'string<)
                  ((= dcst ?f) compare-func)
                  ((member dcst '(?p ?t ?s ?d ?c)) '<)))))
            (run-hooks 'org-after-sorting-entries-or-items-hook)
            ;; Reset the clock marker if needed
            (when cmstr
              (save-excursion
                (goto-char start)
                (search-forward cmstr nil t)
                (move-marker org-clock-marker (point))))
            (message "Sorting entries...done")))

        (defun org-do-sort (table what &optional with-case sorting-type)
          "Sort TABLE of WHAT according to SORTING-TYPE.
        The user will be prompted for the SORTING-TYPE if the call to this
        function does not specify it.  WHAT is only for the prompt, to indicate
        what is being sorted.  The sorting key will be extracted from the car
        of the elements of the table.  If WITH-CASE is non-nil, the sorting
        will be case-sensitive."
          (unless sorting-type
            (message "Sort %s: [a]lphabetic, [n]umeric, [t]ime. A/N/T means reversed:" what)
            (setq sorting-type (read-char-exclusive)))
          (let ((dcst (downcase sorting-type))
                extractfun comparefun)
            ;; Define the appropriate functions
            (cond
             ((= dcst ?n)
              (setq extractfun 'string-to-number
                    comparefun (if (= dcst sorting-type) '< '>)))
             ((= dcst ?a)
              (setq extractfun (if with-case
                                   (lambda (x) (org-sort-remove-invisible x))
                                 (lambda (x) (downcase (org-sort-remove-invisible x))))
                    comparefun (if (= dcst sorting-type)
                                   'string<
                                 (lambda (a b) (and (not (string< a b)) (not (string= a b)))))))
             ((= dcst ?t)
              (setq extractfun
                    (lambda (x)
                      (if (or (string-match org-ts-regexp x)
                              (string-match org-ts-regexp-both x))
                          (org-float-time (org-time-string-to-time (match-string 0 x)))
                        0))
                    comparefun (if (= dcst sorting-type) '< '>)))
             (t (error "Invalid sorting type `%c'" sorting-type)))
            (sort (mapcar (lambda (x) (cons (funcall extractfun (car x)) (cdr x))) table)
                  (lambda (a b) (funcall comparefun (car a) (car b))))))
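    On that last point, a minimal sketch of the fix: in the ?d branch of the key function quoted above, fall back to a far-future key instead of (org-float-time now) when no deadline is found, so undated entries sort below every dated one under the ascending '< comparison. This is an untested patch idea against the excerpt above, not a committed change:

        ;; Hedged sketch: undated entries get a huge key so they sink to the bottom.
        ((= dcst ?d)
         (let ((end (save-excursion (outline-next-heading) (point))))
           (if (re-search-forward org-deadline-time-regexp end t)
               (org-time-string-to-seconds (match-string 1))
             most-positive-fixnum)))  ; instead of (org-float-time now)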

    Read the article

< Previous Page | 9 10 11 12 13 14  | Next Page >