Search Results

Search found 6834 results on 274 pages for 'dojo require'.

  • Best Platform/Engine for turn based Client/Server Android game

    - by Paradine
    I'm currently designing a turn-based game for tablets, initially for Android, with a later iOS port considered in the design. I'm having trouble narrowing down the available technologies enough to even know where to spend my research time, so I'm hoping that if I explain what I am trying to achieve, someone may be able to suggest a platform and/or engine. I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require, although with a higher focus on graphics than I need. Mages looks interesting, although development appears to have ceased. If I could somehow leverage GoogleApps, that would be excellent.

    Here is what I am trying to achieve:

      - PvP turn-based strategy game over the internet - minimal animation and bandwidth required
      - Players match up online using a MetaGame system
      - MatchID created on the Resolution Server and the game starts
      - Clients have a 30-second countdown to select a MoveString
      - Clients send a small, secure, timestamped and MatchID-tagged MoveString to the Resolution Server
      - Resolution Server looks up the MoveString for each player, resolves it, and updates each player's status in the MatchID on the server
      - Resolution Server updates the client views
      - Repeat until victory conditions are met - MatchID closed, rewards earned in the MetaGame

    There will also need to be a full social and account system and a metagame backend, but this could run on separate system(s). The tablet in offline mode would offer catalog browsing and perhaps single-player AI, but I'm focusing on the Resolution Server at this point. I'm not even certain whether I would be looking at an Android app or a web app at this stage! I want a custom GUI, so I guess an app - but as I have little animation, a web app might also work. Probably some combination of both.

    There will be very small data overhead between client and server - essentially a small text string every 30 seconds sent to the Resolution Server, which looks up the effect, applies it to the opponent's string, and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in-game integers are tracked), perhaps triggering small animations/popups on the client to show the end result, e.g. an explosion.

    If you have suggestions for a good technology or platform for building the Resolution Server, I'd love to hear them. Also, if you have experience with the open source engines and could narrow down which (if any) might be most suitable, that would be a big help. Thanks in advance.
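    For what it's worth, the turn-resolution step described above is small enough to sketch independently of any engine or platform. A minimal, hypothetical Java sketch (all class, method, and field names are invented for illustration; the real move-string format and victory rules would be whatever your design specifies):

    ```java
    // Hypothetical sketch of the resolution step described above. All names
    // (MatchState, the five tracked integers, the victory rule) are illustrative.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class MatchState {
        // The "only 5 in game Integers tracked" mentioned in the question.
        int[] playerTotals = new int[5];
        int[] opponentTotals = new int[5];
        boolean closed = false;
    }

    class ResolutionServer {
        private final Map<String, MatchState> matches = new ConcurrentHashMap<>();

        /** Resolve one 30-second turn once both MoveStrings have arrived. */
        void resolveTurn(String matchId, String playerMove, String opponentMove) {
            MatchState state = matches.computeIfAbsent(matchId, id -> new MatchState());
            if (state.closed) return;

            // Look up each move's effect and apply it to the opposing side.
            applyEffect(lookupEffect(playerMove), state.opponentTotals);
            applyEffect(lookupEffect(opponentMove), state.playerTotals);

            if (victoryConditionMet(state)) {
                state.closed = true;   // MatchID closed; metagame rewards handled elsewhere
            }
            // A real server would now push the updated (tiny) state to both clients.
        }

        private int[] lookupEffect(String moveString) {
            // Placeholder: parse the move string into deltas for the five integers.
            return new int[5];
        }

        private void applyEffect(int[] deltas, int[] totals) {
            for (int i = 0; i < totals.length; i++) totals[i] += deltas[i];
        }

        private boolean victoryConditionMet(MatchState s) {
            return s.playerTotals[0] <= 0 || s.opponentTotals[0] <= 0; // illustrative rule only
        }
    }
    ```

    Because the per-turn payload is a single short string and the state is five integers per side, almost any server stack can host this; the platform choice ends up being driven more by the matchmaking, accounts, and metagame backend than by the resolution step itself.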

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description: There are about 10 teams competing against each other over a 6-week period. Each team's total score (out of 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent, regular time intervals during the competition. The points for other scoring elements are awarded at either irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements. Each of the 20 types of scoring elements has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated. Other types of raw inputs are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

    Primary requirements: The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. There will be a quasi-real-time dashboard that will show current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

    Decisions: Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each of the teams' total scores from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because a financial "bottom line" may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
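    The "diff" approach mentioned in the question can be kept quite small: store the last earned score per (team, scoring element) and apply only the change to the team total when an input arrives or is adjusted. A rough Java sketch, with all names hypothetical and the per-element scoring algorithm deliberately left out:

    ```java
    // Minimal sketch of incremental ("diff") aggregation: instead of recomputing a
    // team's 25,000-element total on every new input, apply only the change in the
    // affected element's earned score. Names are illustrative, not a real schema.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class IncrementalScoreboard {
        private final Map<String, Double> teamTotals = new ConcurrentHashMap<>();
        private final Map<String, Double> elementScores = new ConcurrentHashMap<>(); // key: teamId|elementId

        /** Called whenever a raw input arrives or is retroactively adjusted. */
        void onRawInput(String teamId, String elementId, double newEarnedScoreForElement) {
            String key = teamId + "|" + elementId;
            double oldScore = elementScores.getOrDefault(key, 0.0);
            double delta = newEarnedScoreForElement - oldScore;   // the "diff"

            elementScores.put(key, newEarnedScoreForElement);
            teamTotals.merge(teamId, delta, Double::sum);          // O(1) per input, not O(25,000)
        }

        double totalFor(String teamId) {
            return teamTotals.getOrDefault(teamId, 0.0);
        }
    }
    ```

    Note that each element's earned score still has to be recalculated by that element type's own algorithm; the sketch only shows how the team total can be maintained incrementally. Ad-hoc queries and audits can still be served by an occasional full recomputation to catch drift, with the incremental path keeping the dashboard current in between.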

    Read the article

  • Atheros AR9285 / Lenovo G560 wireless not working after installing 13.04

    - by teyi
    I had Ubuntu 12.04 initially installed on my laptop. I upgraded to 12.10, then 13.04. Everything worked fine, including wireless. After adding a new memory card (I only had 2 GB and one memory slot free), my wireless stopped working. I backed up all my data and reinstalled Ubuntu 13.04. Everything works fine except wireless. I bought this laptop in 2010 from Japan. It has an Intel Core i5 CPU M 450 @ 2.40 GHz x 4, 3.7 GB RAM, and a 64-bit OS.

    The output of iwconfig:

    eth0   no wireless extensions.
    lo     no wireless extensions.
    wlan0  IEEE 802.11bgn  ESSID:off/any
           Mode:Managed  Access Point: Not-Associated  Tx-Power=15 dBm
           Retry long limit:7  RTS thr:off  Fragment thr:off
           Power Management:off

    The output of rfkill list all:

    0: ideapad_wlan: Wireless LAN
       Soft blocked: no
       Hard blocked: no
    1: phy0: Wireless LAN
       Soft blocked: no
       Hard blocked: no

    The output of lshw -C network:

    *-network
       description: Wireless interface
       product: AR9285 Wireless Network Adapter (PCI-Express)
       vendor: Atheros Communications Inc.
       physical id: 0
       bus info: pci@0000:05:00.0
       logical name: wlan0
       version: 01
       serial: 78:e4:00:7d:fe:fa
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=ath9k driverversion=3.8.0-19-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
       resources: irq:17 memory:d6400000-d640ffff
    *-network
       description: Ethernet interface
       product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
       vendor: Realtek Semiconductor Co., Ltd.
       physical id: 0
       bus info: pci@0000:06:00.0
       logical name: eth0
       version: 02
       serial: 88:ae:1d:2b:36:ac
       size: 100Mbit/s
       capacity: 100Mbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
       resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff

    The Wi-Fi network appears as disconnected (it's greyed out). Strangely enough, I can see one Wi-Fi network (not mine), but not mine or the rest. That network doesn't require a password. I click on it, try to connect, and I get an error message: "failed to connect to xxxxx ... (32) The access point /org/freedesktop/NetworkManager/AccessPoint/0 was not in the scan list." Can someone help, please?

    Read the article

  • Making the WPeFfort

    - by Laila
    Microsoft Visual Studio 2010 will be launched on April 12th. The basic layout looks pretty much as it did, so it is not immediately obvious on first inspection that it was completely rewritten in the Windows Presentation Foundation (WPF). The VS 2008 codebase had reached the end of its life; it was getting slow to initialize and sluggish to run, and was never going to allow for multi-monitor support or easier extensibility. It can't have been an easy decision to rewrite Visual Studio, but the gamble seems to have paid off. Although certain bugs in the betas caused some anxiety about performance, these seem to have been fixed, and the new Visual Studio is definitely faster.

    In rewriting the codebase, it has been possible to make obvious improvements, such as being able to run different windows on different monitors, and only being presented with the Toolbox controls and References that are appropriate to your target .NET version. There is also an IntelliTrace debugger, and IntelliSense has been improved by separating a 'Suggestion Mode' and a 'Completion Mode' (with its 'Generate From…', 'Highlight References…', and 'Navigate To…' features). At the same time, there has been quite a clear-out; certain features that had been tucked away in previous versions, such as Brief or Emacs emulation support, have been dropped. (Yes, they were being used!)

    There are a lot of features that didn't require the rewrite, but are welcome. It is now easier to develop WPF applications (e.g. drag-and-drop databinding), and there is support for Azure. There are more, and better, templates, and the design tools are greatly improved (e.g. Expression Web, Expression Blend, WPF SketchFlow, the Silverlight designer, Document Map Margin and Inline Call Hierarchy). SharePoint is better supported, and Office apps will benefit from C#'s support of optional and named arguments, and from allowing several Office Solutions within a single deployment package.

    Most importantly, it is a vote of confidence in WPF. VS 2010 is the essential missing component that has been impeding the faster adoption of WPF. The fact that it is actually now written in WPF should reassure the doubters, and convince more developers to make the move from WinForms to WPF. In using WPF, the developers of Visual Studio have had the clout to fix some issues which had been bothering WPF developers for some time (such as blurred text). Do you see a brighter future as a result of transferring from WinForms to WPF? I'd love to know what you think. Cheers, Laila

    Read the article

  • Recap: Oracle at the Gartner Business Intelligence Summit

    - by kimberly.billings
    Getting to Vegas was no fun. As anyone who lives in the Bay Area knows, the SF airport shuts down one runway when it rains, causing major havoc. So rain, rain, rain on Sunday meant delay, delay, delay at the airport. Needless to say, my 6:30 pm flight didn't land in Vegas until 3:00 am! But the travel pains were worth it. There was a lot to be learned at the Gartner BI Summit this year, and the uptick in attendance was reflected in strong booth traffic and engaging conversations in the Oracle booth.

    Oracle customer Dawn Conant, Director, Business Intelligence at Beckman Coulter, generated a lot of interest in her presentation about migrating from Business Objects to Oracle Business Intelligence, Enterprise Edition with Oracle Database 11g. Dawn's story was a very relatable one, as many of the attendees had plans for similar projects.

    One of the most interesting Gartner-led sessions compared the BI/DW megavendors: IBM, Oracle, SAP, and Microsoft. According to Gartner analyst Rita Sallam, these megavendors control about two-thirds of the BI market. Sallam attributes this in part to the fact that organizations are expanding their definitions of BI to also include analytics and performance management. In doing so, they require greater integration of BI applications with a broader set of applications and middleware.

    In a related session, a panel of Gartner analysts compared the Magic Quadrants for BI Platforms, CPM, Data Quality, Data Integration Tools, and Data Warehouses. Oracle is a leader in all of the Magic Quadrants in which it participates and has the most complete stack, including hardware and software, according to Donald Feinberg. Feinberg also commented that in situations with VLDW and solid mixed workloads, Oracle Exadata is making a big difference!

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart?

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to Upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. In the listing below, "(none)" means there is no counterpart in the other directory:

    /etc/init.d/ | /etc/init/
    acpid | acpid.conf
    apache2 | (none)
    apparmor | (none)
    apport | apport.conf
    atd | atd.conf
    bind9 | (none)
    bootlogd | (none)
    cgroup-lite | cgroup-lite.conf
    (none) | console.conf
    console-setup | console-setup.conf
    (none) | container-detect.conf
    (none) | control-alt-delete.conf
    cron | cron.conf
    dbus | dbus.conf
    dmesg | dmesg.conf
    dns-clean | (none)
    friendly-recovery | (none)
    (none) | failsafe.conf
    (none) | flush-early-job-log.conf
    (none) | friendly-recovery.conf
    grub-common | (none)
    halt | (none)
    hostname | hostname.conf
    hwclock | hwclock.conf
    hwclock-save | hwclock-save.conf
    irqbalance | irqbalance.conf
    killprocs | (none)
    lxc | lxc.conf
    lxc-net | lxc-net.conf
    module-init-tools | module-init-tools.conf
    (none) | mountall.conf
    (none) | mountall-net.conf
    (none) | mountall-reboot.conf
    (none) | mountall-shell.conf
    (none) | mounted-debugfs.conf
    (none) | mounted-dev.conf
    (none) | mounted-proc.conf
    (none) | mounted-run.conf
    (none) | mounted-tmp.conf
    (none) | mounted-var.conf
    networking | networking.conf
    network-interface | network-interface.conf
    network-interface-container | network-interface-container.conf
    network-interface-security | network-interface-security.conf
    newrelic-sysmond | (none)
    ondemand | (none)
    plymouth | plymouth.conf
    plymouth-log | plymouth-log.conf
    plymouth-splash | plymouth-splash.conf
    plymouth-stop | plymouth-stop.conf
    plymouth-upstart-bridge | plymouth-upstart-bridge.conf
    postgresql | (none)
    pppd-dns | (none)
    procps | procps.conf
    rc | rc.conf
    rc.local | (none)
    rcS | rcS.conf
    (none) | rc-sysinit.conf
    reboot | (none)
    resolvconf | resolvconf.conf
    rsync | (none)
    rsyslog | rsyslog.conf
    screen-cleanup | screen-cleanup.conf
    sendsigs | (none)
    setvtrgb | setvtrgb.conf
    (none) | shutdown.conf
    single | (none)
    skeleton | (none)
    ssh | ssh.conf
    stop-bootlogd | (none)
    stop-bootlogd-single | (none)
    sudo | (none)
    (none) | tty1.conf
    (none) | tty2.conf
    (none) | tty3.conf
    (none) | tty4.conf
    (none) | tty5.conf
    (none) | tty6.conf
    udev | udev.conf
    udev-fallback-graphics | udev-fallback-graphics.conf
    udev-finish | udev-finish.conf
    udevmonitor | udevmonitor.conf
    udevtrigger | udevtrigger.conf
    ufw | ufw.conf
    umountfs | (none)
    umountnfs.sh | (none)
    umountroot | (none)
    (none) | upstart-socket-bridge.conf
    (none) | upstart-udev-bridge.conf
    urandom | (none)
    (none) | ureadahead.conf
    (none) | ureadahead-other.conf
    (none) | wait-for-state.conf
    whoopsie | whoopsie.conf

    To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of which framework handles which services). So I was quite surprised to learn that there is a significant amount of overlap in service references, and I was unable to discern which of the two is intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and Upstart? Is something else at play here that I'm missing? What is preventing Upstart from completely taking over from init.d? Is there some functionality that certain daemons require which Upstart does not yet have, and which is preventing some services from converting? Or is it something else entirely?

    Read the article

  • Does using GCC specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license under the GPL, because you're linking to GPL-licensed code, and #include "lgpl_program.h" means you're free to license however you want, since the LGPL doesn't explicitly prohibit linking to LGPL source. Now, my question about what isn't clear is:

    [begin question] GCC is GPL licensed, and compiling with GCC does not constitute "integration" into your program, as the GPL puts it. Does using builtin functions (which are specific to GCC) constitute "incorporation", even though you haven't explicitly linked to this GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious whether this could be considered the case. [end question]

    [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented in another way. For example, many builtins were replicated by LLVM, and the argument could be made that they're not implementation-specific to GCC. However, many builtins have no parallel; when compiled they will link GPL-licensed code in GCC and will not compile on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that if builtins don't constitute linking, then open source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface that is identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside]

    Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there - I'm just extrapolating what either answer to my actual question could imply.

    Read the article

  • PHP MVC error handling, view display and user permissions

    - by cen
    I am building a moderation panel from scratch using an MVC approach, and a lot of questions cropped up during development. I would like to hear from others how they handle these situations.

    Error handling: Should you handle an error inside the class method, or should the method return something anyway and you handle the error in the controller? What about PDO exceptions - how should they be handled? For example, let's say we have a method that returns true if the user exists in a table and false if he does not exist. What do you return in the catch statement? You can't just return false, because then the controller assumes that everything is alright while the truth is that something must be seriously broken. Displaying the error from the method completely breaks the whole design. Maybe a page redirect inside the method? (See the sketch below for one approach.)

    The proper way to show a view: The controller right now looks something like this:

    include('view/header.php');
    if ($_GET['m'] == 'something')
        include('view/something.php');
    elseif ($_GET['m'] == 'somethingelse')
        include('view/somethingelse.php');
    include('view/footer.php');

    Each view also checks if it was included from the index page to prevent it being accessed directly. There is a view file for each different document body. Is this way of including different views OK, or is there a more proper way?

    Managing user rights: Each user has his own rights - what he can see and what he can do. Which part of the system should verify that the user has permission to see the view: the controller, or the view itself? Right now I do permission checks directly in the view, because each view can contain several forms that require different permissions, and I would need to make a separate file for each of them if the checks were put in the controller. I also have to re-check the permissions every time a form is submitted, because form data can be easily forged. The truth is, all this permission checking and input validation just turns the controller into a huge if/then/else cluster. I feel like 90% of the time I am doing error checks/permissions/validations and very little of the actual logic. Is this normal, even for popular frameworks?
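    One common answer to the error-handling question is to let the model throw and let the controller decide what to show. The idea is language-agnostic, so here is a rough sketch in Java rather than PHP (all class names are hypothetical; the database call is a placeholder standing in for PDO/JDBC):

    ```java
    // Sketch: the model never returns a misleading false when the lookup itself
    // fails; it throws, and the controller picks an error view in one place.
    class UserLookupException extends RuntimeException {
        UserLookupException(String msg, Throwable cause) { super(msg, cause); }
    }

    class UserModel {
        /** Returns true/false for a normal answer; throws if the lookup itself failed. */
        boolean userExists(String username) {
            try {
                return queryUserCount(username) > 0;   // the real database call goes here
            } catch (java.sql.SQLException e) {
                // Don't return false - that would claim "user does not exist".
                throw new UserLookupException("Could not check user " + username, e);
            }
        }

        private int queryUserCount(String username) throws java.sql.SQLException {
            return 0; // placeholder for the actual query
        }
    }

    class UserController {
        String showUser(String username, UserModel model) {
            try {
                return model.userExists(username) ? "view/user" : "view/not_found";
            } catch (UserLookupException e) {
                // Central place to log the failure and choose an error view.
                return "view/error";
            }
        }
    }
    ```

    The same split applies to PDO exceptions: catch them where the query runs only to wrap them in a domain-level exception, and decide on redirects or error pages in the controller, never inside the model.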

    Read the article

  • Amazon.com Cutting Off Colorado Affiliates

    - by Joe Mayo
    I received an email from Amazon.com today, essentially cutting off my affiliate status because I'm in Colorado. Colorado recently passed legislation that requires retailers to either collect sales tax for on-line transactions or engage in an onerous process that makes you wish you had collected sales tax.  After I Tweeted this, Mike Jones tweeted a link to the legislation.  Here's an excerpt from Amazon.com's email: "Dear Colorado-based Amazon Associate: We are writing from the Amazon Associates Program to inform you that the Colorado government recently enacted a law to impose sales tax regulations on online retailers. The regulations are burdensome and no other state has similar rules. The new regulations do not require online retailers to collect sales tax. Instead, they are clearly intended to increase the compliance burden to a point where online retailers will be induced to "voluntarily" collect Colorado sales tax -- a course we won't take. We and many others strongly opposed this legislation, known as HB 10-1193, but it was enacted anyway. Regrettably, as a result of the new law, we have decided to stop advertising through Associates based in Colorado. We plan to continue to sell to Colorado residents, however, and will advertise through other channels, including through Associates based in other states. There is a right way for Colorado to pursue its revenue goals, but this new law is a wrong way. As we repeatedly communicated to Colorado legislators, including those who sponsored and supported the new law, we are not opposed to collecting sales tax within a constitutionally-permissible system applied even-handedly. The US Supreme Court has defined what would be constitutional, and if Colorado would repeal the current law or follow the constitutional approach to collection, we would welcome the opportunity to reinstate Colorado-based Associates. You may express your views of Colorado's new law to members of the General Assembly and to Governor Ritter, who signed the bill. Your Associates account has been closed as of March 8, 2010, and we will no longer pay advertising fees for customers you refer to Amazon.com after that date. Please be assured that all qualifying advertising fees earned prior to March 8, 2010, will be processed and paid in accordance with our regular payment schedule. Based on your account closure date of March 8, any final payments will be paid by May 31, 2010. We have enjoyed working with you and other Colorado-based participants in the Amazon Associates Program, and wish you all the best in your future.   Best Regards,   The Amazon Associates Team"

    Read the article

  • Is my computer slow due to lack of swap

    - by Kristian Jensen
    A few months ago, I installed Ubuntu 12.04 alongside with Windows 7 on my Asus EEE-PC 1015bx. It has a tendency of freezing and when trying to investigate I found that a swap partition of only 256 MB had been created. The Asus EEE-PC 1015bx is born with 1 GByte RAM only and it is not possible to add further or exchange the existing 1 GByte with a larger card. When looking at the system monitor, it looks like all swap is being utilized along with 70-75% of the RAM, even with very few applications running. Can the lack of much swap space be the reason for my computer running slowly and at times freezing? How can I add a swap partition? Or should I add a swap file instead? At the moment, I see two partitions when viewing the system monitor: one 28.6 GByte ext4 partition which must be the one containing Ubuntu and one 100 GByte fuseblk partition which I assume is the one holding Windows. It shows that I have 18.6 GByte free space on the ext4 partition. Can I "take a bite" from the ext4 partition and convert this into a swap partition? I was thinking something like 3 GBytes for swap considering my limited RAM. I hope that someone can guide me through. Thank you. 20th Oct 2012 - Further details Thank you for below answer which I find very useful. I am certainly considering switching to one of your suggested shells as I can see from the Internet that many have posted that these require much fewer resources than ubuntu. It seems to me that lubuntu is the perfect match for my very limited computer. I will have to wait a few days, though, as I am presently limited by a very slow and restricted Internet connection via satellite. But will lubuntu install as simply another shell replacing unity or will it replace ubuntu all together? Will the software that I have installed under ubuntu still be accessible in lubuntu? And can I return to ubuntu if required? Regarding the actual question of swap: When I run gparted, it shows me that there is one ntfs partition of 100 GBytes from where it boots and the before mentioned ext4 partition of 28.6 GBytes is not mentioned. Could it be that my ubuntu installation resides inside this 100 GBytes ntfs partiotion? And if so, can I take a bite of this for my swap partition? Realising that gparted is shown in Danish, I hope that you can make out what I mean. System monitoring shows below details: Once again I sincerely hope that you can help. Thank you.

    Read the article

  • E-Business Suite Sessions at Sangam 2013 in Hyderabad

    - by Sara Woodhull
    The Sangam 2013 conference, sponsored jointly by the All-India Oracle Users' Group (AIOUG) and the India Oracle Applications Users Group (IOAUG), will be in Hyderabad, India on November 8-9, 2013. This year, the E-Business Suite Applications Technology Group (ATG) will offer two speaker sessions and a walk-in usability test of upcoming EBS user interface features. It's only about two weeks away, so make your plans to attend if you are in India.

    Sessions

    Oracle E-Business Suite Technology: Latest Features and Roadmap
    Veshaal Singh, Senior Director, ATG Development
    Friday, Nov. 9, 11:00-12:00
    This Oracle development session provides an overview of Oracle's product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for Oracle E-Business Suite technology. Come hear about the latest usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications.

    Integration Options for Oracle E-Business Suite
    Rekha Ayothi, Lead Product Manager, ATG
    Friday, Nov. 9, 2:00-3:00
    In this Oracle development session, you will get an understanding of how, when and where you can leverage Oracle's integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio. This session offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other options for integrating Oracle E-Business Suite with other applications.

    Usability Testing

    There will be multiple opportunities to participate in usability testing at Sangam '13. The User Experience team is running a one-on-one usability study that requires advance registration. In addition, we will be hosting a special walk-in usability lab to get feedback on new Oracle E-Business Suite OA Framework features. The walk-in lab is a shorter usability experience that does not require any pre-registration. In both cases, Oracle wants your feedback! Even if you only have a few minutes, come by the User Experience Lab, meet the team, and try the walk-in lab.

    Read the article

  • SQL SERVER – A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Before continuing with this blog post, please read the two earlier parts of the SEQUENCE puzzle: "A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value" and "A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value", where we played a simple guessing game about predicting the next value. The answer to each puzzle is shared on those blog posts as a comment. Now here is the next puzzle, based on yesterday's puzzle. I recently shared this puzzle at a local user group and it was appreciated by the attendees.

    First, execute the script which I have written here. Today's script is a bit different from yesterday's script, as it will require you to do some service-related activities. I suggest you try this in a test environment on your personal computer when no one else is working on it. Do not attempt this on a production server, as it will surely get you in trouble. The purpose is to learn how a sequence behaves during unexpected shutdowns and service restarts. Now guess what the next value will be, as requested in the query.

    USE AdventureWorks2012
    GO

    -- Create sequence
    CREATE SEQUENCE dbo.SequenceID AS BIGINT
        START WITH 1
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 500
        CYCLE
        CACHE 100;
    GO

    -- Following will return 1
    SELECT next value FOR dbo.SequenceID;

    -------------------------------------
    -- simulate server crash by restarting service
    -- do not attempt this on production or any server in use
    ------------------------------------

    -- Following will return ???
    SELECT next value FOR dbo.SequenceID;

    -- Clean up
    DROP SEQUENCE dbo.SequenceID;
    GO

    Once the server is restarted, what will be the next value for SequenceID? We can learn some interesting trivia about this new feature of SQL Server using this puzzle. Hint: pay special attention to the difference between the new number and the earlier number. Can you see the same number in the definition of the CREATE SEQUENCE? Bonus question: how can you avoid the behavior demonstrated in the above query, and does doing so have any effect on performance? I suggest you try to answer this question without running the code in SQL Server 2012. You can restart SQL Server using the command prompt as well. I will follow up with the answer in the comments below. Recently my friend Vinod Kumar wrote an excellent blog post on SQL Server 2012: Using SEQUENCE; you can head over there to learn about sequences in detail. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
    Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered:

    (1) how SOA and BPM benefit from "hardware and software engineered for each other"
    (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic
    (3) customer case studies
    (4) best practices and topology guidelines
    (5) information on tools that help operate, manage, provision, and deploy, to help reduce overall TCO

    Extreme engineering at its best! Session details: 10/2/12 (Tuesday), 11:45 AM, Moscone South, Room 308

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. Still, for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and weighing down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET  C#
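    As an aside, the dynamic-loading idea the post describes is not specific to .NET. A rough Java analogue (not from the original post), using reflection so that a JSON library such as Gson is only touched if it happens to be on the classpath at runtime:

    ```java
    // Rough Java analogue of the dynamic-loading approach described in the post:
    // no compile-time dependency on the JSON library; reflect on it only if it is
    // actually present. Gson is used purely as an example library here.
    import java.lang.reflect.Method;

    public final class OptionalJson {
        private static Object gson;     // cached instance, like the cached JsonNet above
        private static Method toJson;

        /** Returns JSON for value, or null if no JSON library is available. */
        public static synchronized String serialize(Object value) {
            try {
                if (gson == null) {
                    Class<?> gsonClass = Class.forName("com.google.gson.Gson"); // may throw
                    gson = gsonClass.getDeclaredConstructor().newInstance();
                    toJson = gsonClass.getMethod("toJson", Object.class);
                }
                return (String) toJson.invoke(gson, value);
            } catch (ReflectiveOperationException e) {
                return null;   // library missing or incompatible - caller decides what to do
            }
        }
    }
    ```

    The same trade-offs apply as in the C# version: no compile-time checking, a reflection cost on each call, and a runtime failure mode if the optional library is absent.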

    Read the article

  • What are the alternatives to "overriding a method" when using composition instead of inheritance?

    - by Sebastien Diot
    If we should favor composition over inheritance, the data part of it is clear, at least for me. What I don't have a clear solution for is how overriding methods, or simply implementing them if they are defined in a pure virtual form, should be handled.

    An obvious way is to wrap the instance representing the base class inside the instance representing the sub-class. But the major downside of this is that if you have, say, 10 methods and you want to override a single one, you still have to delegate every other method anyway. And if there are several layers of inheritance, you now have several layers of wrapping, which becomes less and less efficient. Also, this only solves the problem for the object's "client": when another object calls the top wrapper, things happen like in inheritance. But when a method of the deepest instance, the base class, calls its own methods that have been wrapped and modified, the wrapping has no effect: the call is performed by its own method, instead of by the highest wrapper.

    One extreme alternative that would solve those problems would be to have one instance per method. You only wrap methods that you want to override, so there is no pointless delegation. But now you end up with an incredible number of classes and object instances, which will have a negative effect on memory usage, and this will require a lot more coding too.

    So, are there alternatives (preferably alternatives that can be used in Java) that:

      - do not result in many levels of pointless delegation without any changes;
      - make sure that not only the client of an object, but also all the code of the object itself, is aware of which implementation of a method should be called;
      - do not result in an explosion of classes and instances;
      - ideally put the extra memory overhead that is required at the "class"/"particular composition" level (static, if you will), rather than having every object pay the memory overhead of composition?

    My feeling tells me that the instance representing the base class should be at the "top" of the stack/layers so it receives calls directly, and can process them directly too if they are not overridden. But I don't know how to do it that way.
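    For reference, this is the "wrap and delegate" shape the question describes, written out in Java with a tiny hypothetical interface so the boilerplate is visible:

    ```java
    // Illustration of the "wrap and delegate" approach discussed above. Overriding
    // one method still forces the wrapper to forward the other one by hand - the
    // boilerplate the question is about. All names are invented for illustration.
    interface Greeter {
        String greet(String name);
        String farewell(String name);
    }

    class BasicGreeter implements Greeter {
        public String greet(String name)    { return "Hello, " + name; }
        public String farewell(String name) { return "Bye, " + name; }
    }

    class ShoutingGreeter implements Greeter {
        private final Greeter inner;                 // composition instead of "extends"

        ShoutingGreeter(Greeter inner) { this.inner = inner; }

        // The one method we actually want to change:
        public String greet(String name) { return inner.greet(name).toUpperCase(); }

        // Pure pass-through, needed only because we are not inheriting:
        public String farewell(String name) { return inner.farewell(name); }
    }
    ```

    Note that if BasicGreeter.greet() internally called farewell(), the wrapper would never see that call, which is exactly the self-call problem the question raises about the deepest instance calling its own methods.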

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.
    Brief Explanation
    My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.
    Long Explanation
    Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag). This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:
    Choice 1: Individual meta-data vs. centralized meta-data
    Individual Meta-Data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
    Centralized Meta-Data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
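    As an illustration (not from the original question), the tri-state "tagged / untagged / unknown" requirement can be modelled quite cheaply in Java; treating a missing entry as "dirty" also handles newly introduced tags without touching every image record. All names below are hypothetical.

        import java.util.*;
        import java.util.stream.Collectors;

        enum TagState { PRESENT, ABSENT }   // absence of an entry means "dirty"/unknown

        final class ImageRecord {
            final String path;
            final Map<String, TagState> tags = new HashMap<>();   // tag name -> state
            ImageRecord(String path) { this.path = path; }
        }

        final class ImageLibrary {
            private final Map<String, ImageRecord> images = new HashMap<>();  // path -> record
            private final Set<String> knownTags = new HashSet<>();

            void addImage(String path) { images.putIfAbsent(path, new ImageRecord(path)); }
            void addTag(String tag)    { knownTags.add(tag); }   // implicitly "dirty" everywhere
            void setTag(String path, String tag, boolean present) {
                knownTags.add(tag);
                images.get(path).tags.put(tag, present ? TagState.PRESENT : TagState.ABSENT);
            }

            List<String> imagesWith(String tag) {
                return images.values().stream()
                        .filter(r -> r.tags.get(tag) == TagState.PRESENT)
                        .map(r -> r.path).collect(Collectors.toList());
            }

            List<String> dirtyFor(String tag) {   // never decided either way in the Viewer
                return images.values().stream()
                        .filter(r -> !r.tags.containsKey(tag))
                        .map(r -> r.path).collect(Collectors.toList());
            }
        }

    For the crash-safety and 10,000-image scale the question mentions, an in-memory structure like this would normally sit on top of an embedded store (for example SQLite or an append-only journal of tag changes) so that each tagging operation can be persisted as it happens.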

    Read the article

  • Why is there nobody talking about an alternative to HTML & CSS? [closed]

    - by Nic
    HTML is such an old and cumbersome language, which was intended just to mark up text. Today it's very rare to see a static HTML website, or a site with only text or a very simple layout. As a web developer I find it inconvenient to use HTML & CSS - very repetitive and cumbersome. I think that for a lot of websites it could be simplified a lot. Tim Berners-Lee (W3) wrote a document named "The World Wide Web: Past, Present and Future" in August 1996: ... though HTML will be considered part of the established infrastructure (rather than an exciting new toy), there will always be new formats coming along, and it may be that a more powerful and perhaps a more consistent set of formats will eventually displace HTML. So, more than 15 years later, HTML is still here and it's here to stay. Why? Why does searching for XML alternatives bring up so many relevant results, while searching for HTML alternatives brings up almost none? Answers like "it's too hard to change a standard" don't answer the question, since a lot of new standards have emerged since the initiation of the web. I'm also not looking for answers that suggest using tools to simplify the process, or formats that in any way depend on HTML or CSS; technologies that currently require a plugin and aren't even trying to become open standards (like Flash) aren't an answer either. BTW, here are two articles written more than two years ago as food for thought; they might help with writing better answers. "HTML, CSS, and Web Development Practices: Past, Present, and Future", describing a very related problem, by Jens O. Meiert. "A Brief History of HTML" by Scott Reynen. Here is a quote from the end: So now you can answer questions about HTML5 without even looking at the draft, which is handy, because the draft is 400+ pages long. Why is there a new tag in HTML5? Because some browser vendor (maybe the one that also owns a large video site) wanted it. Why are there so many scriptable interface elements in HTML5? Because some browser vendor (maybe the one selling phones without Flash support) wants them. Why is there no support for RDFa in HTML5? Apparently no browser vendor wanted it. Is that the future?

    Read the article

  • Thank you Geeks With Blogs for letting me join your community!

    - by GreeNTUG
    First, a link to the blog I can no longer edit because Office Live blew away my digital identity and so I can no longer log into it (the source of a loooong blog about protecting your digital identity sometime when I have more time and after it has played out to the end) http://greentug.spaces.live.com/ The following are the communities I participate in: Green & Sustainability.  I run a virtual user group on Green and Sustainability as it relates to developers and software architects.  It was located at greentug.groups.live.com, and we will need to find a new digital location for it, because I am locked out of that site as well. BizSpark Tampa Bay:  I run a BizSpark group for Microsoft technologists (meetup.com, search for BizSpark Tampa Bay) and speak at Code Camps about "No Better Time to Start Your Own Tech Business".  The meetup group facilitates a balanced presentation that is respectful to anyone wanting to start their own business, whether part-time or full-time, whether micro (just you), sustainable (grow to 2-25-ish, self-funded), high growth (get venture capital or other funding, grow it, sell it within 5 years, do it again), or hybrid (the new model going forward).  It is an "action" group, with assignments and homework if you want to get the most out of it.   At the end of a year you will either have your business on the path to where you want it to be, or you will know the steps you need to do to get it there. Women in Technology Have been participating in the Women in Technology community since 2008, my main interests in this area are mentoring women in the workplace to have them believe they can become geeks and double their income, and to mentor them with respect to starting and running their own business. Access 2010/SharePoint 2010.  This is a game-changer with respect to the Access community (the ap both devs and IT Pros love to hate, the other a-word that's not a fruit).  I conducted Lunch n Learns and Brunch n Learns around this topic before the Office 2010/SharePoint 2010 launch, and spoke on the topic at SharePoint Saturday Tampa in Nov 2009. Interested in learning more about: Using Silverlight HD Streaming out in the non-technical world (horses and equestrian sport).  Migrating to Access Web Services and VB .Net from VBA (see the Access 2010/SharePoint 2010 interest above) Windows Phone 7!  Exciting opportunities both for Green and Sustainability and for my "day job" of Environmental, Health & Safety (EHS). My day job is Environmental, Health & Safetey (EHS) consulting and software solutions, where that interfaces with the developer world is with respect to opportunities around Green and Sustainability, The SmartGrid and Juval Lowy's EnergyNet, both of which will require a lot of technology and software to make them work, The new Microsoft Partner competency for "Digital Home", and The Y2K kind of deadline around how managing chemicals in ERP systems is changing because of Global Harmonization, which hits the EU with a hard deadline on 11/30/10 (yes, this year), and hits the USA about 15 months later. Hope you enjoy my contributions to the digital geek community, and feel free to email me, [email protected] (the email leftover after my digital identity was blown away), and [email protected] (this one could go away at some future point) Best, Kathy Malone

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.
    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.
    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors
    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:
    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums
    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.
    Performance?
    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.
    Dynamic Loading - Worth it?
    Dynamic loading is not something you need to worry about most of the time, but on occasion dynamic loading makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…
    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service.
    Middleware / Platform as a Service goals
    First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down:
    - PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...)
    - Infrastructure provisioning (virtual machine, OS, platform) and management is hidden from the PaaS user [administrator or developer]
    - Additionally, PaaS could / should define target SLAs, and the platform should ensure the SLAs are met automatically.
    PaaS use case
    To make it more tangible, we have an IT Administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month. The SLA agreed with management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational.
    The Challenges
    Some of the challenges the IT Administrator is facing are:
    - how are we going to ensure the processing power?
    - how are we going to provision the (virtual) machines and the Java EE platform, and deploy the application?
    - how are we going to monitor the SLA?
    - how are we going to react to the SLA, and increase capacity?
    The Ideal Solution
    Ideally, the whole process should be automated, "set it and forget it", and require no human interaction:
    - The vendor packages the solution as deployable image(s)
    - The images are deployed to the IaaS
    - From there, automated processes take care of the SLA
    Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris
    Oracle Solaris provides the OS and virtualisation through Solaris Zones. OpenStack is a part of Solaris 11.2 and provides cloud management (console and API). WebLogic 12c with Dynamic Clusters provides the platform. Traffic Manager provides load balancing. On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will:
    - provision a new virtual machine based on an image which is configured for the WebLogic domain
    - add the machine to the WebLogic domain
    - increase the number of servers in the dynamic cluster
    - start the newly provisioned server
    Stay tuned for part II
    The whole solution, with a working demo, will be presented in one of our Partner WebCasts in June, exact date TBA. Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
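    The post does not include the Cloud Manager script itself; purely as an illustration, a control loop of the kind described could look like the Java sketch below. The metric source (WebLogic's pending request count, for example via WLDF or a runtime MBean) and the OpenStack provisioning call are assumptions and are left as stubs.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class CloudManager {
            private static final int SLA_MAX_PENDING_REQUESTS = 100;

            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                // Check the SLA once a minute; scale out when it is violated.
                scheduler.scheduleAtFixedRate(CloudManager::checkSla, 0, 60, TimeUnit.SECONDS);
            }

            private static void checkSla() {
                int pending = readPendingRequestCount();
                if (pending > SLA_MAX_PENDING_REQUESTS) {
                    scaleOut();
                }
            }

            // Stub: would query WLDF / the domain runtime MBean server for the
            // pending request count of the dynamic cluster.
            private static int readPendingRequestCount() {
                return 0;
            }

            // Stub: would call the OpenStack API to boot a VM from the prepared
            // WebLogic image, then start the next server in the dynamic cluster.
            private static void scaleOut() {
                System.out.println("SLA violated - provisioning an additional WebLogic instance");
            }
        }

    The real work is of course inside the two stubs; the sketch only shows the "monitor, compare to SLA, react" shape of the automation.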

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Play Your Position Until the Play Breaks Down&hellip;then Do Whatever it Takes.

    - by AjarnMark
    If I didn’t know better, I would think that K. Brian Kelley (blog | twitter) has been listening in on conversations with my boss. In his recent blog post Successful Teams: Knowing When to Step Out of Your Role, Brian describes quite clearly a philosophy that my boss has been trying to get across to everyone in the department.  We have been using sports analogies, like how important it is to play your position, until the play breaks down (such as a fumble) and then do whatever it takes it to cover each other / recover the ball / win.  While we like having very skilled people who could do a lot of different tasks, it is important that you first do your assigned tasks, and only once those are complete, or failure of the larger mission is probable, do you consider walking away from them to help someone else with their responsibilities. The thing that you cannot afford, especially on a lean team, is the really nice guy who is always trying to help out other people, but in doing so, is never quite getting his own responsibilities taken care of.  Yes, if the Running Back drops the football, you want any member of the team in the vicinity to jump on it, whether that is the leading blocker or the Quarterback.  But until the fumble happens, you want the leading blocker to focus on doing his job, and block for the Running Back.  If the blocker is doing any other job than his primary responsibility, you’re probably going to lose. This sounds logical enough, but it is really easy to go astray with the best of intentions.  This is especially true on a small, tight-knit team, where it is really easy to get sucked into someone else’s task or problem, doubly so if you think you can do it better or faster than them.  Now you are really setting yourself up for failure.  The right thing is to let the other person do the job, even if it seems less efficient in the short-run, so that you can focus on the tasks which require your expertise.  Don’t break formation…don’t abandon your assignment, until it is clear that mission failure is imminent, and even then, as Brian writes, it should be with the agreement of the mission leader. Thanks, Brian, for putting it so well.  This has been distributed throughout our department.

    Read the article

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware assisted virtualization (HAV).  That means any PC running Windows 7 Pro, or higher, can now run this software.  And that’s a great thing because, as I noted in a post almost five month ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult.  That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology.  And that was just plain silly. One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host’s desktop environment.  I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner. The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday.  And, interestingly, most of the others were also desktop-related, rather than server-related.  This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester.  Microsoft now seems to understand desktop virtualization is in high-demand and strengthens the Windows franchise.  As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation. One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday.  In fact, there’s a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/. I’d love to see virtual applications and entire virtual desktops offered as Azure-branded services.  This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office.  Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade. Microsoft should do more, and faster.  WVPC still does not support 64-bit guest images, even on 64-bit hosts.  That needs to be fixed.  File access from the guest to the host needs to be improved (right now, it’s done through Terminal Services/Remote Desktop file sharing, and it’s slow) and VM load times need to be significantly reduced before virtualized apps can become the norm.  (I suppose the advance of solid state drive technology will help there.) I do think these improvements will come, because Microsoft is focused on the virtual desktop now.  And that’s a smart focus to have.

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how does Tuxedo perform load balancing.  This is often asked by customers that see an imbalance in the number of requests handled by servers offering a specific service. First of all let me say that Tuxedo really does load or request optimization instead of load balancing.  What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time.   Simple round robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests.  But the question I ask is, "to what benefit"?  Instead Tuxedo scans the queues (which may or may not correspond to servers based upon SSSQ - Single Server Single Queue or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed.  The scan is always performed in the same order and during the scan if a queue is empty the request is immediately placed on that queue and request routing is done.  However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it where work is the sum of all the requests queued weighted by their "load" value as defined in the UBBCONFIG file.  What this means is that under light loads, only the first few queues (servers) process all the requests as an empty queue is often found before reaching the end of the scan.  Thus the first few servers in the queue handle most of the requests.  While this sounds non-optimal, in fact it capitalizes on the underlying operating systems and hardware behavior to produce the best possible performance.  Round Robin scheduling would spread the requests across all the available servers and thus require all of them to be in memory, and likely not share much in the way of hardware or memory caches.  Tuxedo's system maximizes the various caches and thus optimizes overall performance.  Hopefully this makes sense and now explains why you may see a few servers handling most of the requests.  Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed.  Next post I'll try and cover how this applies to servers in a clustered (MP) environment because the load balancing there is a little more complicated. Regards,Todd LittleOracle Tuxedo Chief Architect
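    To make the scan described above concrete, here is a small illustrative Java sketch (not Tuxedo code): queues are scanned in a fixed order, the request goes to the first empty queue, and only if every queue is busy is the total queued "load" compared.

        import java.util.List;

        class ServiceQueue {
            private int queuedLoad = 0;        // sum of the load values of queued requests
            private int queuedRequests = 0;

            boolean isEmpty()      { return queuedRequests == 0; }
            int queuedLoad()       { return queuedLoad; }
            void enqueue(int load) { queuedRequests++; queuedLoad += load; }
        }

        class RequestRouter {
            // Picks a queue the way the post describes: the first empty queue wins,
            // otherwise the queue with the least total queued load is chosen.
            // Assumes at least one queue is configured.
            static ServiceQueue choose(List<ServiceQueue> queuesInScanOrder, int requestLoad) {
                ServiceQueue leastLoaded = null;
                for (ServiceQueue q : queuesInScanOrder) {
                    if (q.isEmpty()) {
                        q.enqueue(requestLoad);
                        return q;              // routing done as soon as an idle queue is found
                    }
                    if (leastLoaded == null || q.queuedLoad() < leastLoaded.queuedLoad()) {
                        leastLoaded = q;
                    }
                }
                leastLoaded.enqueue(requestLoad);
                return leastLoaded;
            }
        }

    Under light load the loop returns early on one of the first queues every time, which is exactly why a handful of servers appear to handle most of the requests.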

    Read the article
