Search Results

Search found 17913 results on 717 pages for 'old school rules'.


  • Wireless Problem on Acer Aspire 5610z

    - by Ugur Can Yalaki
    I installed Ubuntu 12.04 on my machine, but I can't get the wireless connection to work. My computer is an Acer Aspire 5610z, and I found that some other people who have the same computer face the same problem. Here is some information about it:

    ****** info trace ******

    * uname -a *
    Linux ucy-Aspire-5610Z 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:22:28 UTC 2013 i686 i686 i386 GNU/Linux

    * lsb_release *
    Distributor ID: Ubuntu
    Description: Ubuntu 12.04.3 LTS
    Release: 12.04
    Codename: precise

    * lspci *
    05:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 01)
        Subsystem: AMBIT Microsystem Corp. Device [1468:0422]
        Kernel driver in use: b43-pci-bridge
    06:01.0 Ethernet controller [0200]: Broadcom Corporation BCM4401-B0 100Base-TX [14e4:170c] (rev 02)
        Subsystem: Acer Incorporated [ALI] Device [1025:0090]
        Kernel driver in use: b44

    * lsusb *
    Bus 001 Device 004: ID 04e8:6863 Samsung Electronics Co., Ltd
    Bus 001 Device 002: ID 5986:0100 Acer, Inc Orbicam
    Bus 002 Device 002: ID 046d:c52f Logitech, Inc. Wireless Mouse M305
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    * PCMCIA Card Info *
    PRODID_1="" PRODID_2="" PRODID_3="" PRODID_4=""
    MANFID=0000,0000
    FUNCID=255

    * iwconfig *

    * rfkill *
    0: acer-wireless: Wireless LAN
        Soft blocked: no
        Hard blocked: no

    * lsmod *
    ssb_hcd 12781 0
    ssb 51554 2 ssb_hcd,b44

    * nm-tool *
    NetworkManager Tool
    State: connected (global)

    Device: usb0 [Wired connection 2]
        Type: Wired
        Driver: rndis_host
        State: connected
        Default: yes
        HW Address:
        Capabilities: Carrier Detect: yes
        Wired Properties: Carrier: on
        IPv4 Settings: Address: 192.168.42.7, Prefix: 24 (255.255.255.0), Gateway: 192.168.42.129, DNS: 192.168.42.129
        IPv6 Settings: Address: ::a05d:a1ff:fea4:1738, Prefix: 64, Gateway: fe80::504d:76ff:fe86:db04; Address: fe80::a05d:a1ff:fea4:1738, Prefix: 64, Gateway: fe80::504d:76ff:fe86:db04; DNS: fe80::504d:76ff:fe86:db04

    Device: eth2
        Type: Wired
        Driver: b44
        State: unavailable
        Default: no
        HW Address:
        Capabilities: Carrier Detect: yes
        Wired Properties: Carrier: off

    * NetworkManager.state *
    [main]
    NetworkingEnabled=true
    WirelessEnabled=true
    WWANEnabled=true
    WimaxEnabled=true

    * NetworkManager.conf *
    [main]
    plugins=ifupdown,keyfile
    dns=dnsmasq
    [ifupdown]
    managed=false

    * interfaces *
    auto lo
    iface lo inet loopback

    * iwlist *

    * resolv.conf *
    nameserver 127.0.0.1

    * blacklist *
    [/etc/modprobe.d/blacklist-ath_pci.conf]
    blacklist ath_pci
    [/etc/modprobe.d/blacklist-bcm43.conf]
    blacklist b43
    blacklist b43legacy
    blacklist ssb
    blacklist bcm43xx
    blacklist brcm80211
    blacklist brcmfmac
    blacklist brcmsmac
    blacklist bcma
    [/etc/modprobe.d/blacklist.conf]
    blacklist evbug
    blacklist usbmouse
    blacklist usbkbd
    blacklist eepro100
    blacklist de4x5
    blacklist eth1394
    blacklist snd_intel8x0m
    blacklist snd_aw2
    blacklist i2c_i801
    blacklist prism54
    blacklist bcm43xx
    blacklist garmin_gps
    blacklist asus_acpi
    blacklist snd_pcsp
    blacklist pcspkr
    blacklist amd76x_edac

    * modinfo *
    filename: /lib/modules/3.8.0-32-generic/kernel/drivers/usb/host/ssb-hcd.ko
    license: GPL
    description: Common USB driver for SSB Bus
    author: Hauke Mehrtens
    srcversion: E127A51EDC8F44D2C2A8F15
    alias: ssb:v4243id0819rev*
    alias: ssb:v4243id0817rev*
    alias: ssb:v4243id0808rev*
    depends: ssb
    intree: Y
    vermagic: 3.8.0-32-generic SMP mod_unload modversions 686

    filename: /lib/modules/3.8.0-32-generic/kernel/drivers/ssb/ssb.ko
    license: GPL
    description: Sonics Silicon Backplane driver
    srcversion: 14621F6EC014F731244437C
    alias: pci:v000014E4d00004350sv*sd*bc*sc*i*
    alias: pci:v000014E4d0000432Csv*sd*bc*sc*i*
    alias: pci:v000014E4d0000432Bsv*sd*bc*sc*i*
    alias: pci:v000014E4d00004329sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004328sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004325sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004324sv*sd*bc*sc*i*
    alias: pci:v000014E4d0000A8D6sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004322sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004321sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004320sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004319sv*sd*bc*sc*i*
    alias: pci:v000014A4d00004318sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004318sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004315sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004312sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004311sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004307sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004306sv*sd*bc*sc*i*
    alias: pci:v000014E4d00004301sv*sd*bc*sc*i*
    depends:
    intree: Y
    vermagic: 3.8.0-32-generic SMP mod_unload modversions 686

    * udev rules *
    PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb1:0 (b44)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
    PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb2:0 (b44)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
    PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb3:0 (b44)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

    * dmesg *
    [ 2.385241] ssb: Found chip with id 0x4311, rev 0x01 and package 0x00
    [ 2.385256] ssb: Core 0 found: ChipCommon (cc 0x800, rev 0x11, vendor 0x4243)
    [ 2.385266] ssb: Core 1 found: IEEE 802.11 (cc 0x812, rev 0x0A, vendor 0x4243)
    [ 2.385276] ssb: Core 2 found: USB 1.1 Host (cc 0x817, rev 0x03, vendor 0x4243)
    [ 2.385286] ssb: Core 3 found: PCI-E (cc 0x820, rev 0x01, vendor 0x4243)
    [ 2.448147] ssb: Sonics Silicon Backplane found on PCI device 0000:05:00.0
    [ 2.468112] ssb: Found chip with id 0x4401, rev 0x02 and package 0x00
    [ 2.468124] ssb: Core 0 found: Fast Ethernet (cc 0x806, rev 0x07, vendor 0x4243)
    [ 2.468132] ssb: Core 1 found: V90 (cc 0x807, rev 0x03, vendor 0x4243)
    [ 2.468140] ssb: Core 2 found: PCI (cc 0x804, rev 0x0A, vendor 0x4243)
    [ 2.508230] ssb: Sonics Silicon Backplane found on PCI device 0000:06:01.0
    [ 2.528620] b44 ssb1:0 eth0: Broadcom 44xx/47xx 10/100 PCI ethernet driver

    ******** done ********

    Thank you already for your help.

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Connecting to Your Data?

    When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

    Data Sources and Data Sets

    A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask, "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set.

    Creating a local Data Source

    You can embed a connection from your report directly to your JProCo database which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa, then you need to update the Data Source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server, then they would all need to be updated at once. It's possible to make a project-level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source.

    Creating a Shared Data Source

    The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer.

    A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties.

    If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name, then select the JProCo database from the database list.

    If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer.

    What's next

    I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
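    If you want to sanity-check the connection information outside of SSRS, a quick console program can open the same connection the shared data source will use. This is only a minimal sketch, assuming the JProCo sample database from the book and Windows Authentication as the post recommends; swap in your own server and database names:

        using System;
        using System.Data.SqlClient;

        class ConnectionCheck
        {
            static void Main()
            {
                // Default instance -> "Localhost"; named instance -> @"Localhost\InstanceName"
                var connectionString =
                    @"Data Source=Localhost;Initial Catalog=JProCo;Integrated Security=True";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open(); // throws if the server name, database, or credentials are wrong
                    Console.WriteLine("Connected to {0}, database {1}",
                        conn.DataSource, conn.Database);
                }
            }
        }

    If this little program connects, the same Data Source/Initial Catalog pair should work in the Shared Data Source dialog box.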

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011—“Redefining Leadership”—held from Nov 2nd to Nov 4th in San Francisco, with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gathering to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference:

    1. The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The “BSR GlobeScan State of Sustainable Business Poll 2011” – a survey of nearly 500 business leaders from 300 member companies – shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years, but consider integrating sustainability into their core business functions the key challenge. It is still difficult for many companies that are committed to the sustainability agenda to find investors that understand the long-term implications, and as Al Gore said, “Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market.”

    2. Companies are required to address increasing compliance requirements and transparency in their supply chain, especially in relation to conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor upstream the sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue, companies need to collaborate and partner with peer companies in their industry as well as in other industries to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs’ (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public has the opportunity to place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance.

    3. A new standard for reporting on supply chain greenhouse gas emissions is available. The new “Scope 3” Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company’s products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn’t just for leading companies but a necessity for any business.

    4. Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. investors are increasingly including ESG data in their analyses. This trend will likely increase as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment.

    5. Software companies are offering an increasing variety of solutions to help drive changes and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle, was a panelist in the “Trends in Sustainability Software” session and commented: “How we think about our business decisions really comes down to how we think about cost. And as long as we don’t assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines ‘Is this product profitable?’ we would then have a much better decision.”

    For more information on BSR visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference.

    By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
    It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. Well, it's that time of year again; we just finished testing and packaging OUM 5.6. It will be released for general availability to qualifying customers and partners this month. Because of this, I've been reflecting back on how the birth of Oracle's Unified Method – OUM – came about.

    As the Release Director of OUM, I've been honored to package every method release. No, maybe you'd say it's not so special. Of course, anyone can use packaging software to create an .exe file. But to me, it is pretty special, because so many people work together to make each release come about. The rich content that results is what makes OUM's history worth talking about.

    To me, professionally speaking, working on OUM has been "a labor of love". My youngest child was just 8 years old when OUM was born, and she's now in High School! Watching her grow and change has been fascinating; if you ask her, she's grown up hearing about OUM. My son would often walk into my home office and ask "How is OUM today, Mom?" I am one of many people that take care of OUM, and I have watched the method "mature" over these last 6 years. Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this out loud), but there are so many others who collaborate and care about OUM Development.

    I've thought about writing this blog entry for a long time just to reflect on how far the Method has come. Each release, as I prepare the OUM Contributors list, I see how many people's experience and ideas it has taken to create this wealth of knowledge, process and task guidance, as well as templates and examples. If you're wondering how many people, just go into OUM, select the Resources button on the top of most pages of the method, and on that Resources page click the ABOUT link.

    So now back to my nostalgic moment as I finished release 5.6 packaging. I reflected back on all the things that happened that caused OUM to become not just a dream but to actually come to fruition. Here are some key conditions that make it possible for each release of the method:

    - A vision to have one method instead of many methods, thereby focusing on deeper, richer content
    - People within Oracle's consulting organization willing to contribute to OUM, providing Subject Matter Experts who are willing to write down and share what they know
    - Oracle's continued acquisition of software companies, and the need to assimilate high-quality existing materials from these companies
    - The need to bring together people from very different backgrounds and provide a common language to support Oracle product implementations that often involve multiple product families

    What came first, and then what was the strategy?

    Initially OUM 4.0 was based on Oracle's J2EE Custom Development Method (JCDM). It was a good "backbone" (work breakdown structure), it was Unified Process based, and it had good content around UML as well as custom software development. But it needed to be extended in order to achieve the OUM vision. What happened after that was to take in the "best of the best": the legacy and acquired methods were scheduled for assimilation into OUM, one release after another. We incrementally built OUM. We didn't want to lose any of the expertise that was reflected in AIM (Oracle's legacy Application Implementation Method), Compass (PeopleSoft's application implementation method) and so many more.

    When was OUM born?

    OUM 4.1 published April 30, 2006. This release allowed Oracle's Advanced Technology groups to begin the very first implementations of Fusion Middleware. In the early days of the Method we would prepare several releases a year. Our iterative release development cycle began and continues to be refined with each Method release. Now we typically see one major release each year.

    The OUM release development cycle is not unlike many Oracle implementation projects in that we need to gather requirements, prioritize, prepare the content, test, package and then go production. Typically we develop an OUM release MoSCoW (must have, should have, could have, and won't have) right after the prior release goes out. These are the high-level requirements. We break the timeframe into increments – frequent checkpoints that help us assess the content and measure progress. We work as a team to prioritize what should be done in each increment. Yes, the team provides the estimates for what can be done within a particular increment. We sometimes have Method Development workshops (physically or virtually) to accelerate content development on a particular subject area; that is where the best content results.

    As the written content nears the final stages, it goes through edit and evaluation through peer reviews, and then moves into the release staging environment. Then content freeze and testing of the method pack take place. This iterative cycle is run using the OUM artifacts that make sense "fit for purpose": project plans, MoSCoW lists, and test plans are just a few of the OUM work products we use on a Method Release project.

    In 2007, OUM 4.3, 4.4 and 4.5 were published. With the release of 4.5, our custom BI method (Data Warehouse Method FastTrack) was assimilated into OUM. These early releases helped us align Oracle's Unified Method with other industry standards. Then in 2008 we made significant changes to the OUM "backbone" to support Applications Implementation projects; that went into the OUM 5.0 release.

    Now things started to get really interesting. Next we had some major developments in the Envision focus area in the area of Enterprise Architecture. We acquired some really great content from the former BEA, Liquid Enterprise Method (LEM), along with some SMEs who were willing to work at bringing this content into OUM. The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications.

    Of course, we've developed a wealth of OUM training materials, and that work also helps to improve the method content. It is one thing to write "how to", and quite another to be able to teach people how to use the materials to improve the success of their projects. I've learned so much by teaching people how to use OUM.

    What's next?

    So here toward the end of 2012, what's in store in OUM 5.6? Well, I'm sure you won't be surprised: the answer is Cloud Computing. More details to come in the next couple of weeks! The best part of being involved in the development of OUM is to see how many people have "adopted" OUM over these six years – Clients, Partners, and Oracle Consultants. The content just gets better with each release.
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out of memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

    The memory problem

    Given the fact that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:

    - Perform all the validation whilst the data was in memory.
    - Relate it using an object model.
    - Start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data.
    - Once all related data had been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands.
    - Once all the data is written without error (which can take several minutes for large datasets), commit the transaction.

    The problem now comes from the fact that we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions.

    Potential improvements

    - I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields? (Natural keys can't be trusted to be unique across all submitters.)
    - Use of staging tables to get data into the database, only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
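    One common way to combine the staging-table idea with the line-by-line parsing is to stream the file and bulk-copy it into staging in fixed-size batches, so memory use stays flat and no single transaction stays open for minutes. The following is only a minimal sketch of that pattern, not the poster's actual code: the table name StagingValues, its (RowId, ColumnName, Value) columns, and the batch size are hypothetical placeholders chosen to match an EAV layout.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        static class TabFileImporter
        {
            // Streams a tab-delimited file line by line into a staging table.
            // Assumes the first line of the file holds the column headers.
            public static void Import(string path, string connectionString)
            {
                const int batchSize = 5000; // arbitrary; tune for your environment

                using (var reader = new StreamReader(path))
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    string[] headers = reader.ReadLine().Split('\t');

                    // One small, reusable buffer instead of the whole file in memory.
                    var batch = new DataTable();
                    batch.Columns.Add("RowId", typeof(long));
                    batch.Columns.Add("ColumnName", typeof(string));
                    batch.Columns.Add("Value", typeof(string));

                    long rowId = 0;
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        rowId++;
                        string[] fields = line.Split('\t');
                        // Pivot each physical row into EAV-style rows.
                        for (int i = 0; i < fields.Length && i < headers.Length; i++)
                            batch.Rows.Add(rowId, headers[i], fields[i]);

                        if (batch.Rows.Count >= batchSize)
                            Flush(batch, conn); // each flush is its own short-lived operation
                    }
                    if (batch.Rows.Count > 0)
                        Flush(batch, conn);
                }
            }

            private static void Flush(DataTable batch, SqlConnection conn)
            {
                using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "StagingValues" })
                {
                    bulk.WriteToServer(batch);
                }
                batch.Clear(); // release the rows so the next batch starts empty
            }
        }

    Because each bulk copy completes independently, the long-running multi-minute transaction disappears; a final INSERT ... SELECT from the staging table into the real destination tables can then run as one short, set-based statement.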

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week's MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences of poor operations than local, state and federal governments. One key challenge area is taxation.

    Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income – including employers, investments and other sources of taxable income and deductions – must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.

    The Problem of Constituent Data Management

    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.

    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent

    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle solution for mastering constituent data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships.

    Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service-oriented applications, all built around a SOA governance model that encourages effective design and reuse.

    I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric.

    Also, don't forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization. Our guest speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Oracle Fusion Middleware 11g next launch phase - what a week of product releases! Feedback from our partners

    - by Jürgen Kress
    Product releases:

    - SOA Suite 11gR1 Patch Set 2 (PS2)
    - BPM Suite 11gR1
    - Oracle JDeveloper 11g (11.1.1.3.0) (Build 5660)
    - Oracle WebLogic Server 11gR1 (10.3.3)
    - Oracle JRockit (4.0)
    - Oracle Tuxedo 11gR1 (11.1.1.1.0)
    - Enterprise Manager 11g Grid Control Release 1 (11.1.0.1.0) for Linux x86/x86-64
    - All Oracle Fusion Middleware 11gR1 Software Download

    BPM Suite 11gR1 Released, by Manoj Das

    Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying the BEA ALBPM product, which became Oracle BPM 10gR3, with the Oracle stack. Some of the highlights of this release are:

    - BPMN 2.0 modeling and simulation
    - Web-based Process Composer for BPMN and Rules authoring
    - Zero-code environment with full access to Oracle SOA Suite's rich set of application and other adapters
    - Process Spaces – out-of-box integration with WebCenter Suite
    - Process Analytics – native process cubes as well as integration with Oracle BAM

    You can learn more about this release from the documentation.

    Notes about downloading and installing: please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only an incremental patch). To install:

    - Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location)
    - Download and install SOA 11.1.1.3.0
    - During the configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with SOA Suite 11g (11.1.1.3.0)
    - If you plan to use Process Spaces, also install WebCenter 11.1.1.3.0, which is also delivered as a sparse release and needs to be installed on top of WebCenter 11.1.1.2.0

    SOA Suite 11gR1 Patch Set 2 (PS2) released, by Demed L'Her

    We just released SOA Suite 11gR1 Patch Set 2 (PS2)! You can download it as usual from OTN (main platforms only) and eDelivery (all platforms). 11gR1 PS2 is delivered as a sparse installer, that is to say that it is meant to be applied on the latest full install (11gR1 PS1). That's great for existing PS1 users, who simply need to apply the patch and run the patch assistant – but an extra step for new users, who will first need to download SOA Suite 11gR1 PS1 (in addition to the PS2 patch).

    What's in that release? Bug fixes, of course, but also several significant new features. Here is a short selection of the most significant ones:

    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)

    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content, so if you don't find what you need today, try again soon!)

    Are you Systems Integrators and Independent Software Vendors ready to adopt and to deliver? Make sure that you become trained:

    - Local training calendars
    - Register for the SOA Partner Community & Webcast: www.oracle.com/goto/emea/soa

    What is your feedback? Who installed the software? Please feel free to share your experience at http://twitter.com/soacommunity #soacommunity

    Technorati Tags: SOA partner community ACE Directors SOA Suite PS2 BPM11g

    First feedback from our ACE Directors and key Partners:

    "Now, these are great times to start the journey into BPM!" – Hajo Normann
    "Reuse of components across the Oracle 11g Fusion Middleware stack; BPM is just one of the components plugging into the stack, and it reuses all other components." – Mr. Leon Smiers
    "With BPM 11g, Oracle offers a very competitive product which will have a big effect on the IT market." – Guido Schmutz
    "We have real BPMN 2.0, which gets executed. No more transformation from business models to executable models – just press the run button..." – Torsten Winterberg
    "Oracle BPM Suite 11g brings out-of-box integration with WebCenter Suite and the Oracle ADF development framework." – Andrejus Baranovskis
    "With the release of BPM Suite 11g, Oracle has defined new standards for Business Process platforms." – Geoffroy de Lamalle
    "With User Messaging Service you can let SOA Suite 11g do all your messaging." – Edwin Biemond

    Read the article

  • My First Weeks at Red Gate

    - by Jess Nickson
    Hi, my name’s Jess and early September 2012 I started working at Red Gate as a Software Engineer down in The Agency (the Publishing team). This was a bit of a shock, as I didn’t think this team would have any developers! I admit, I was a little worried when it was mentioned that my role was going to be different from normal dev. roles within the company. However, as luck would have it, I was placed within a team that was responsible for the development and maintenance of Simple-Talk and SQL Server Central (SSC).

    I felt rather unprepared for this role. I hadn’t used many of the technologies involved and of those that I had, I hadn’t looked at them for quite a while. I was, nevertheless, quite excited about this turn of events. As I had predicted, the role has been quite challenging so far. I expected that I would struggle to get my head round the large codebase already in place, having never used anything so much as a fraction of the size of this before. However, I was perhaps a bit naive when it came to how quickly things would move. I was required to start learning/remembering a number of different languages and technologies within time frames I would never have tried to set myself previously.

    Having said that, my first week was pretty easy. It was filled with meetings that were designed to get the new starters up to speed with the different departments, ideals and rules within the company. I also attended some lightning talks being presented by other employees, which were pretty useful. These occur once a fortnight and normally consist of around four speakers. In my spare time, we set up the Simple-Talk codebase on my computer and I started exploring it and worked on my first feature – redirecting requests for URLs that used incorrect casing! It was also during this time that I was given my first introduction to test-driven development (TDD) with Michael via a code kata. Although I had heard of the general ideas behind TDD, I had definitely never tried it before. Indeed, I hadn’t really done any automated testing of code before, either. The session was therefore very useful and gave me insights as to some of the coding practices used in my team. Although I now understand the importance of TDD, it still seems odd in my head and I’ve yet to master how to sensibly step up the functionality of the code a bit at a time.

    The second week was both easier and more difficult than the first. I was given a new project to work on, meaning I was no longer using the codebase already in place. My job was to take some designs, a WordPress theme, and some initial content and build a page that allowed users of the site to read provided resources and give feedback. This feedback could include their thoughts about the resource, the topics covered and the page design itself. Although it didn’t sound the most challenging of projects when compared to fixing bugs in our current codebase, it nevertheless provided a few sneaky problems that had me stumped. I really enjoyed working on this project as it allowed me to play around with HTML, CSS and JavaScript; all things that I like working with but rarely have a chance to use. I completed the aims for the project on time and was happy with the final outcome – though it still needs a good designer to take a look at it!

    I am now into my third week at Red Gate and I have temporarily been pulled off the website from week 2. I am again back to figuring out the Simple-Talk codebase. Monday provided me with the chance to learn a bunch of new things: system level testing, Selenium and Python. I was set the challenge of testing a bug fix dealing with the search bars in Simple-Talk. The exercise was pretty fun, although Mike did have to point me in the right direction when I started making the tests a bit too complex. The rest of the week looks set to be focussed on pair programming with Mike as we work together on a new feature. I look forward to the challenges that still face me and hope that I will be able to get up to speed quickly. *fingers crossed*

    Read the article

  • VB Myth - Case Insensitivity is Awesome!

    - by Damon
    I was reading Andy Brown's article "10 Reasons Why Visual Basic is Better than C#" and the first claim is that VB is superior because of case insensitivity. I think the reasons he outlines are basically as follows:

    - Your fingers get tired finding the shift key (e.g. typing PascalCase and camelCase members)
    - You are much more likely to make mistakes while typing names
    - When you accidentally leave caps lock on, it really matters

    These three arguments culminate in the conclusion: "It doesn't matter if you disagree with everything else in this article: case-sensitivity alone is sufficient reason to ditch C#!"

    Righto. I've been using Visual Basic since version 5.0, and I wrote a book about ASP.NET in Visual Basic, so I want everyone to know I'm definitely not a VB.NET hater. I had to convert to C# because it was the language of preference at the places I've worked, so I'm used to both languages. I love me some case sensitivity. So first, let's debunk the claims.

    First, your fingers do not get tired of finding the shift key unless you are writing code in Notepad and compiling everything on the command line. Visual Studio pretty much takes away the need to use the shift key at all. For the most part, any programmer worth a damn doesn't have to type more than about 3-5 characters of any variable or method name before IntelliSense kicks in to help. VB or C#, if you are not using the tab key for autocomplete then you are typing too much anyway, regardless of whether the shift key is involved or not. Also, you've got to be a pretty hard-core candy ass if you're complaining at the end of the day that your little fingers are hurting from hitting the shift key.

    Second, I cannot logically refute the fact that if there are more stringent rules about case sensitivity it will lead to more mistakes. As such, know that you will be more prone to mistakes in C#. However, let's talk about the magnitude of the problem. If you are using IntelliSense then you have auto-correction built in, so you probably won't have much of a problem in the first place. If you manage to bypass IntelliSense and type something wrong, you normally are immediately presented with a red squiggly to let you know something is amiss. Normally, a person would look at the problem, figure out what the heck went wrong, and then avoid that problem again in the future. Granted, I have met people who seem to lack this capability, but their problem is deeper than a decision between VB.NET and C#. So let's make sure that we're all on the same page about this problem. If you have two teams of developers, one that uses VB.NET and one that uses C#, do not expect to see the VB.NET team drinking beer at the end of the project in festive revelry while the C# team is crying over what the hell to do because their code is riddled with case-sensitivity problems that nobody can resolve.

    Lastly, if you leave your caps lock key on, turn it off. Really, what kind of ass-hat would write an entire VB.NET application ENTIRELY IN CAPS? I happen to be a fan of case sensitivity because it encourages precision and uniformity. The last thing I need is a code base that looks like it was ransacked by LeEt HacKors wHo Can uSe wHateVer cASe tHey wanT. I mean really, if you saw someone write this:

        PuBLIc Sub MyMethod
            ...
        End Sub

    And upon asking them why BL was upper case, they responded "Oh, I accidentally hit the shift key there. Fortunately for me VB.NET is a case insensitive language, so I saved a couple of keystrokes by leaving it in there."

    Or if you saw:

        PUBLIC SUB ANOTHERMETHOD
            ...
        END SUB

    And the response to why it was uppercased was "Yeah, I accidentally had caps lock on. Fortunately for me VB.NET doesn't care. Really dodged a bullet there; glad I wasn't using C#." Would you not think that a bit ridiculous? If you want to convince C# developers that C# sucks, go for it. But the case insensitivity argument is crap.
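    For what it's worth, the flip side of the argument can be shown in a couple of lines. This is just an illustrative sketch (not from Andy Brown's article, and the variable name is made up): in C#, identifiers that differ only in case are distinct, so an accidental casing slip is a compile error rather than something that silently lingers in the code base.

        using System;

        class CaseDemo
        {
            static void Main()
            {
                int itemCount = 5;

                // Console.WriteLine(ItemCount); // compile error: the name 'ItemCount' does not exist
                Console.WriteLine(itemCount);    // only the consistently cased name compiles
            }
        }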

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions. On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume. These things seem so interchangeable, so what is the difference?

    Resume

    Let's start with the one most of us have heard of – the resume. A resume is a summary of your job and education history. If you have ever applied for a job, you will have used a resume. The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world. For such an essential skill, unfortunately it is one that many people struggle with.

    So let's discuss what makes a great resume. First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text – size 14 or 16 if the rest is size 12, for example). You need to make sure that if you catch the recruiter's attention, they know how to get a hold of you. As for qualifications, be quick and to the point. Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points. Use good action verbs, like "finished," "arranged," "solved," and "completed." Include hard numbers – don't just say you "changed the filing system," say that you "revolutionized the storage of over 250 files in less than five days." Doesn't that sentence sound much more powerful?

    Curriculum Vitae (CV)

    Now let's talk about curriculum vitae, or "CVs". A CV is more like an expanded resume. The same rules are still true: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points. However, CVs are often required in more technical fields – like science, engineering, and computer science. This means that you need to really highlight your education and technical skills.

    Difference between Resume and CV

    Resumes are expected to be one or two pages long – CVs can be as many pages as necessary. If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you! On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable). You can also include awards, associations, teaching and research experience, and certifications. A CV is a place to really expand on all your experience and how great you will be in this particular position.

    Biography (Bio)

    Chances are, you already know what a bio is, and you have even read a few of them. Think about the one or two paragraphs that every author includes in the back flap of a book. Think about the sentences under a blogger's photo on every "About Me" page. That is a bio. It is a way to quickly highlight your life experiences. It is essentially the way you would introduce yourself at a party.

    Where a bio is required for a job, chances are they won't want to know about where you were born and how many pets you have, though. This is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter's interest is the best way to get your foot in the door. Think of a bio as your entire resume put into words.

    Most bios have a standard format. In paragraph one, talk about your most recent position and accomplishments there, specifically how they relate to the job you are applying for. If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two. Paragraphs three and four are for highlighting publications, education, certifications, associations, etc. To wrap up your bio, provide your contact info and availability (dates and times).

    Where to use What?

    For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc. If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else) or choose which of your documents is the strongest.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • When does "proper" programming no longer matter?

    - by Kai Qing
    I've been a full-time programmer for about 8 years now. Web-based mostly, ranging in weird jobs for clients. Never anything I "want" to do. So my experience is limited to what I've been contracted to do, having no real incentive to master anything in particular. So here's my scenario, and ultimately what I wonder about...

    I've been building an Android game in my spare time. It's using the libgdx library, so quite a bit of the heavy lifting is done for me. I don't read much of the docs, because unless it's in tutorial format I will just not care, and ultimately most of my questions have already been asked on Stack Overflow. I get along fine and my game works as expected... suspiciously well, even. So much so that I wonder why one should bother to be "proper" when coding if the end result is ultimately the same.

    To be more specific, I used a hashtable because I wanted something close to an associative array – human-readable key values. In other places to achieve similar things, I use a vector. I know libgdx has vector2 and vector3 classes, but I've never used them. When I come across weird problems and search Stack Overflow for help, I see a lot of people just reaming the questions that use a certain datatype when another one is technically "proper." Like using an ArrayList because it does not require defined bounds, versus re-defining an int[] with new known boundaries. Or even something trivial like this:

        for(int i = 0; i < items.length; i ++) {
            // do something
        }

    I know it evaluates items.length on every iteration. I just don't care. I know items will never be more than 15 to 20 items. So why bother caring if I evaluate items.length on every iteration?

    So I wonder – why does everyone get all up in arms over this? Who cares if I use a less efficient datatype to get the job done? I ran some tests to see how the app performs using the lazy, get-it-done-fast-and-don't-look-back method I just described, versus the proper, follow-the-tutorial-and-use-the-exact-data-types-suggested-by-the-community method. The results: same thing. Average 45 fps. I opened every app on the phone and the Galaxy Tab. Same deal. No difference.

    My game is pretty graphics intensive. It's not like it's just a simple thing. I expected it to perform kind of badly, since I don't care to optimize image assets or... well, you probably get the idea. I'm making the game for fun. As a joke, really. But in doing so I'm working outside the normal scope of my job, which is to always follow the rules and do it the right way. So to say, I am without bounds here, and this has caused me to wonder why I ever really care to be "proper".

    So I guess my question to you is this: Is there a threshold when it no longer matters to be proper? Is there a lasting, longer-term consequence to the lazy, get-it-done-and-don't-look-back route? Is it OK to say, "so long as it gets the job done, I don't care"?

    Disclaimer: When I program my game, I am almost always drunk. I do it to remember why I got into this stuff to begin with, because the monotony of client-based web work will make you hate being a programmer. I'm having a blast and my game is not crashing, tests well, performs well, looks good on all devices so far and has no noticeable negative impact on any of my testing devices. I expected failure because I was being so drunkenly careless with my code, but to my surprise, it had no noticeable impact. I am now starting to question the need to be careful. Help me regain the ability to care! ... or explain why it's not a bad thing to not care.
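    As an aside on the loop question: the "proper" version reviewers usually ask for just hoists the length once. Here is a minimal sketch of both forms – written in C# to match the other examples on this page, mirroring the Java-style snippet above, with made-up array contents:

        using System;

        class LoopDemo
        {
            static void Main()
            {
                int[] items = { 3, 1, 4, 1, 5 }; // stands in for the 15-20 element arrays in the post

                // As written in the post: the length is re-read on every iteration.
                // For a plain array this read is trivially cheap.
                for (int i = 0; i < items.Length; i++)
                {
                    Console.WriteLine(items[i]);
                }

                // The hoisted form: read the length once. This matters when the
                // bound is an expensive property or method call, not an array field.
                int count = items.Length;
                for (int i = 0; i < count; i++)
                {
                    Console.WriteLine(items[i]);
                }
            }
        }

    Which is roughly what the 45 fps test showed: for small arrays the two forms are indistinguishable, and the hoisted version only earns its keep when the loop bound is genuinely expensive to evaluate.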
    Secondary disclaimer: I am aware of the benefits of maintainability – for myself and others. Agreed. But it's not like someone happening across my inefficient int[] loop won't know what it does. As an experienced programmer, those kinds of things are just clear on sight. I document the complex stuff for myself, knowing I was drunk and will probably need a reminder. Those notes would clarify any confusion for someone who might ever gaze upon my ridiculous game – though the reality is that either I maintain it myself or it fades into time. I'm OK with that. But if it doesn't slow the device down, or crash, then crossing the t's and dotting the i's might actually require more time than it's worth.

    Read the article

  • It was a figure of speech!

    - by Ratman21
Yesterday I posted the following as an attention getter / advertisement (as well as my feelings) in the groups I am in on the social networking site LinkedIn, and boy did I get responses.   I am fighting mad (a figure of speech, really) about not having a job! Look, just because I am over 55 and have gray hair, it does not mean my brain is dead or that I can no longer troubleshoot a router or circuit or LAN issue, or that I can't do "IT" work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at minimum wage. I know you will end up keeping me around (hopefully at normal pay). Is anyone hearing me… come on, take up the challenge!   These were the responses I got.   I hear you. We just need to retrain and get our skills up to speed is all. That is what I am doing. I have not given up. Just got to stay on top of the game. Experience is on our side; if we have the credentials and we are reasonable about our salaries, this should not be an issue.   Already on it: I have gone back to school and earned three certifications (CompTIA A+, Security+ and Network+), and I am now studying for my Cisco CCNA certification. As to my salary, I am willing to work at a very reasonable rate.   You need to re-brand yourself like a product, market and sell yourself. You need to smarten up, look and feel a million dollars, re-energize yourself, regain your confidence. Either start your own business, or re-write your CV so it stands out from the rest (get the template off the internet). Contact every recruitment agent in your town, state, country and overseas, and on the web. Apply to every job you think you could do; you may not get it, but you will make a contact for your network, which may lead to a job at the end of the tunnel. Get in touch with everyone you know from past jobs. Do charity work. I maintain the IT network, stage electrical and the telecom equipment in my church.   Again, already on it. I have emailed the world, it seems, with my resume and cover letters. So far I have rewritten them, or had them rewritten, over seven times. Re-energize? I never lost my energy level or my self-confidence in my work (now if I could get some HR personnel to see the same). I also volunteer at my church; I created and maintain the church web site.   I share your frustration. Sucks being over 50 and looking for work. Please don't sell yourself short at min wage, because the employer will think that's your worth. Keep trying!!   I never stop trying, and minimum wage is only for 90 days, if someone takes up the challenge. Some posts asked if I am keeping up with technology.   Do you keep up with the latest technology and can you speak the language fluidly?   Yep to that, and as to speaking it, also a yep! I am a geek, you know. I heard from others over the 50-year mark, and younger too.   I'm with you! I keep getting told that I don't have enough experience because I just recently completed a Masters-level course in Microsoft SQL Server, which gave me a project-intensive equivalent of between 2 and 3 years of experience. On top of that training, I have 19 years as an applications programmer and database administrator. I can normalize rings around experienced DBAs and churn out effective code with the best of them. But my 19 years is worthless as far as most recruiters and HR people are concerned, because it is not the specific experience for which they're looking. HR AND RECRUITERS TAKE NOTE: Experience, whatever the language, translates across platforms and technology!
By the way, I'm also over 55 and still have "got it"!   I never lost it, and I can also work rings around younger techs.   I'm 52 and female and seem to be having the same issues. I have over 10 years' experience in tech support (with a BS in CIS) and can't get hired either.   Oh, I only have an AS in computer science, along with my certifications.   Keep the faith; I have been unemployed since August of 2008. I agree with you... I am willing to return to the beginning of my retail career and work myself back through the ranks, if someone will look past the grey and realize the knowledge I would bring to the table.   I also would like someone to look past the gray.   Interesting approach, volunteering to work for minimum wage for 90 days. I'm in the same situation as you, being 55 & balding w/white hair, so I know where you're coming from. I've been out of work now for a year. I'm in Michigan, where the unemployment rate is estimated to be 15% (the worst in the nation) and, even though I've got 30+ years of IT experience ranging from mainframe to PC desktop support, it's difficult to even get a face-to-face interview. I had one prospective employer tell me flat out that I "didn't have the energy required for this position". Mostly I never get any feedback. All I can say is good luck & try to remain optimistic.   He said WHAT! Yes, remaining optimistic is key, along with faith in God. Then there was this (for lack of a better word) jerk.   Give it up already. You were too old to work in high tech 10 years ago. Scratch that, 20 years ago! Try selling hot dogs in front of Fry's Electronics. At least you would get a chance to eat lunch with your previous colleagues....   You know, the funny thing about this person is that I checked out his profile. He is older than I am.

    Read the article

  • Best Method For Evaluating Existing Software or New Software

How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system, or to be the core foundation of a new system? What is the best method for evaluating existing software, or new software still in the design phase? One of the industry-preferred methodologies is the Active Reviews for Intermediate Designs (ARID) evaluation process. ARID is a hybrid of the Active Design Review (ADR) methodology and the Architecture Tradeoff Analysis Method (ATAM).

So what is ARID? ARID's main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as "Yes" or "No". The ADR process avoids "Yes"/"No" questions because they can be leading, depending on how the question is asked; additionally, such questions tend to receive less thought than more open-ended ones.

Common Active Design Review questions:
- What possible exceptions can occur in this component, application, or solution?
- How should exceptions be handled in this component, application, or solution?
- Where should exceptions be handled in this component, application, or solution?
- How should the component, application, or solution flow, based on the design?
- What is the maximum execution time for every component, application, or solution?
- In what environments can this component, application, or solution run?
- What data dependencies does this component, application, or solution have?
- What kind of data does this component, application, or solution require?

OK, now I know what ARID is; how can I apply it? Let's imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we have a series of nine steps, broken into two phases, to ensure that the correct OTS solution is purchased.

Phase 1
- Identify the reviewers
- Prepare the design briefing
- Prepare the seed scenarios
- Prepare the materials

When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool composed of the developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct before coding starts. The design briefing consists of a summary of the design, with examples of the design solving real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the reviewers could actually implement it. In the example of purchasing an OTS product, I would review my briefing with the review facilitator prior to its distribution, to ensure that nothing was excluded that should have been included. This practice also lets me test the length of the briefing, to ensure that it can be delivered in an appropriate amount of time. Seed scenarios are designed to illustrate conceptualized scenarios when applied to a set of sample data; these scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.
Materials included:
- Presentation
- Seed scenarios
- Review agenda

Phase 2
- Present ARID
- Present the design
- Brainstorm and prioritize scenarios
- Apply scenarios
- Summarize

Prior to the start of any ARID review meeting, the facilitator should walk through the remaining steps of ARID so that all participants know exactly what they will be doing before the review process starts. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time, no questions about the design or its rationale may be asked by the review panel as a standard; instead, they are written down for use later in the process. After the presentation, the list of compiled questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product for the problem at hand.

After the design presentation, a brainstorming and prioritization step reduces the seed scenarios down to just the highest-priority ones; these are then used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to them. The intended output of this step is code or pseudo-code that makes use of the provided examples while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless the reviewers all become stuck; when this occurs, it is documented, along with the reason the designer needed to get the review panel back on track. Once all of the scenarios have been completed, the review facilitator goes over the issues that arose during the process with the group. The reviewers are then polled as to the efficacy of the review experience.

References: Clements, Paul, Kazman, Rick, & Klein, Mark (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.

    Read the article

  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
This article describes how to display a Tree/Details layout using task flow regions.

Use Case Description
Let us take a scenario where we need to display Tree/Details: the left region contains a category hierarchy with items listed in a tree structure (e.g., Region-Countries-Locations-Departments in tree format), and the right region contains the Employees list. In detail, the user drills down through categories using the tree until Employees are listed; clicking a tree node name displays the Employee list in the adjacent pane for that particular tree node.

Implementation Steps
The script for creating the tables and inserting the data required for this application is CreateSchema.sql. Let's create a Java EE web application with entities based on the Regions, Countries, Locations, Departments and Employees tables. Create a Stateless Session Bean and a data control for it. Add the code below to the session bean, expose the method in the local/remote interface, and generate a data control for it.
Note: in the code below, "em" is an EntityManager.

public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) {
    String queryString = null;
    try {
        // use equals() for string comparison; == compares references
        if ("null".equals(treeNodeType)) {
            queryString = "select * from Employees emp ORDER BY emp.employee_id ASC";
        } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) {
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id JOIN countries cont "
                + "ON loc.country_id = cont.country_id JOIN regions reg "
                + "ON cont.region_id = reg.region_id and reg.region_name = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) {
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id JOIN countries cont "
                + "ON loc.country_id = cont.country_id and cont.country_name = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) {
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id and loc.city = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) {
            queryString = "select * from Employees emp INNER JOIN Departments dept "
                + "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '"
                + paramValue + "'";
        }
    } catch (NullPointerException e) {
        System.out.println(e.getMessage());
    }
    return em.createNativeQuery(queryString, Employees.class).getResultList();
}

In the ViewController project, create two ADF task flows with page fragments and name them FirstTaskflow and SecondTaskflow respectively. Open FirstTaskflow and, from the component palette, drop a view (page fragment) named TreeList.jsff. Open SecondTaskflow, drop a view (page fragment) named EmpList.jsff, and create two parameters in its overview Parameters tab as shown in the image below. Open TreeList.jsff and, from the data control palette, drop regionsFindAll->Tree as an ADF Tree.
In the Edit Tree Binding dialog, for Tree Level Rules, select the display attributes as follows:
- model.Regions - regionName
- model.Countries - countryName
- model.Locations - city
- model.Departments - departmentName

In the structure panel, click on af:Tree - t1 and edit its selectionListener property. Create a "TreeBean" managed bean with "session" scope as shown in the image below, create a new method named getTreeNodeSelectedValue, and click OK. Open the TreeBean managed bean and add the code below:

private String treeNodeType;
private String paramValue;

public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) {
    RichTree tree = (RichTree) selectionEvent.getSource();
    RowKeySet addedSet = selectionEvent.getAddedSet();
    Iterator i = addedSet.iterator();
    TreeModel model = (TreeModel) tree.getValue();
    model.setRowKey(i.next());
    JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
    // oracle.jbo.Row
    Row rw = node.getRow();
    Object selectedTreeNode = node.getAttribute(0);
    Object treeListType = node.getBindings();
    String treeNodeType = treeListType.toString();
    this.setParamValue(selectedTreeNode.toString());
    this.setTreeNodeType(treeNodeType);
}

public void setTreeNodeType(String treeNodeType) {
    this.treeNodeType = treeNodeType;
}

public String getTreeNodeType() {
    return treeNodeType;
}

public void setParamValue(String paramValue) {
    this.paramValue = paramValue;
}

public String getParamValue() {
    return paramValue;
}

Open EmpList.jsff and, from the data control palette, drop empFilteredByTreeNode->Employees->Table as an ADF read-only table. After selecting the Employees result set, in the Edit Action Binding dialog pass the pageFlowScope parameters as shown in the image below. In the EmpList.jsff page, click the Bindings tab, click Create Executable Binding, select Invoke Action, and proceed as shown in the image below. Edit the executeEmpFiltered invoke action's properties and set Refresh to ifNeeded, so the method is executed whenever the page requires it. Create a Main.jspx page using the Oracle Three Column Layout page template. Drop FirstTaskflow as a region in the start facet and SecondTaskflow as a region in the center facet; in the Edit Task Flow Binding dialog, pass the input parameters as shown in the image below. Run Main.jspx: the tree is displayed in the left region and the employee details are displayed in the right region. Click on Americas in the tree, and all employees related to the Americas region are displayed. Click on Americas->United States of America->South San Francisco->Accounting, and only employees belonging to the Accounting department are displayed.
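One caveat worth flagging: the session bean above assembles SQL by concatenating paramValue into the query string, which is open to SQL injection. A minimal, hypothetical variant of the last branch using positional parameter binding with the same EntityManager is sketched below; the exact placeholder syntax (?1 vs. ?) can vary by JPA provider, so treat this as an assumption to verify against yours:

// Hypothetical sketch: bind paramValue instead of concatenating it into the SQL.
String queryString = "select * from Employees emp "
    + "INNER JOIN Departments dept ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID "
    + "and dept.DEPARTMENT_NAME = ?1";
return em.createNativeQuery(queryString, Employees.class)
         .setParameter(1, paramValue)
         .getResultList();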

    Read the article

  • Scripting custom drawing in Delphi application with IF/THEN/ELSE statements?

    - by Jerry Dodge
I'm building a Delphi application which displays a blueprint of a building, including doors, windows, wiring, lighting, outlets, switches, etc. I have implemented a very lightweight script of my own for issuing drawing commands to the canvas; the script is loaded from a database. For example, one command is ELP 1110,1110,1290,1290,3,8388608, which draws an ellipse; the parameters are 1110x1110 to 1290x1290, with a pen width of 3 and the color 8388608 converted from an integer to a TColor. What I'm now doing is implementing objects with common drawing routines, and I'd like to use my scripting engine, but this calls for IF/THEN/ELSE statements and such. For example, when I'm drawing a light, if the light is turned on, I'd like to draw it yellow, but if it's off, I'd like to draw it gray. My current scripting engine has no recognition of such statements; it just accepts simple drawing commands which correspond to TCanvas methods. Here's the procedure I've developed (incomplete) for executing a drawing command on a canvas:

function DrawCommand(const Cmd: String; var Canvas: TCanvas): Boolean;
type
  TSingleArray = array of Single;
var
  Br: TBrush;
  Pn: TPen;
  X: Integer;
  P: Integer;
  L: String;
  Inst: String;
  T: String;
  Nums: TSingleArray;
begin
  Result := False;
  if Assigned(Canvas) then
  begin
    { only touch the canvas once we know it is assigned }
    Br := Canvas.Brush;
    Pn := Canvas.Pen;
    if Length(Cmd) > 5 then
    begin
      L := UpperCase(Cmd);
      if Pos(' ', L) > 0 then
      begin
        Inst := Copy(L, 1, Pos(' ', L) - 1);
        Delete(L, 1, Pos(' ', L));
        L := L + ',';
        SetLength(Nums, 0);
        X := 0;
        while Pos(',', L) > 0 do
        begin
          P := Pos(',', L);
          T := Copy(L, 1, P - 1);
          Delete(L, 1, P);
          SetLength(Nums, X + 1);
          Nums[X] := StrToFloatDef(T, 0);
          Inc(X);
        end;
        Br.Style := bsClear;
        Pn.Style := psSolid;
        Pn.Color := clBlack;
        if Inst = 'LIN' then
        begin
          Pn.Width := Trunc(Nums[4]);
          if Length(Nums) > 5 then
          begin
            Br.Style := bsSolid;
            Br.Color := Trunc(Nums[5]);
          end;
          Canvas.MoveTo(Trunc(Nums[0]), Trunc(Nums[1]));
          Canvas.LineTo(Trunc(Nums[2]), Trunc(Nums[3]));
          Result := True;
        end
        else if Inst = 'ELP' then
        begin
          Pn.Width := Trunc(Nums[4]);
          if Length(Nums) > 5 then
          begin
            Br.Style := bsSolid;
            Br.Color := Trunc(Nums[5]);
          end;
          Canvas.Ellipse(Trunc(Nums[0]), Trunc(Nums[1]), Trunc(Nums[2]), Trunc(Nums[3]));
          Result := True;
        end
        else if Inst = 'ARC' then
        begin
          Pn.Width := Trunc(Nums[8]);
          Canvas.Arc(Trunc(Nums[0]), Trunc(Nums[1]), Trunc(Nums[2]), Trunc(Nums[3]),
            Trunc(Nums[4]), Trunc(Nums[5]), Trunc(Nums[6]), Trunc(Nums[7]));
          Result := True;
        end
        else if Inst = 'TXT' then
        begin
          Canvas.Font.Size := Trunc(Nums[2]);
          Br.Style := bsClear;
          Pn.Style := psSolid;
          T := Cmd;
          Delete(T, 1, Pos(' ', T));
          Delete(T, 1, Pos(',', T));
          Delete(T, 1, Pos(',', T));
          Delete(T, 1, Pos(',', T));
          Canvas.TextOut(Trunc(Nums[0]), Trunc(Nums[1]), T);
          Result := True;
        end;
      end
      else
      begin
        // No space found, not a valid command
      end;
    end;
  end;
end;

What I'd like to know is: what is a good, lightweight third-party scripting engine I could use to accomplish this? I would hate to implement parsing of IF, THEN, ELSE, END, IFELSE, IFEND, and all those necessary commands. I simply need the ability to tell the scripting engine that if certain properties meet certain conditions, it needs to draw the object a certain way. The light example above is only one scenario, but the same solution needs to be applicable to other scenarios as well, such as a door being open or closed, locked or unlocked, and drawing it a different way accordingly. This needs to be implemented at the object script drawing level.
I can't hard-code any of these scripting/drawing rules; the drawing needs to be controlled by the current state of the object, and I may also wish to draw a light a certain shade or darkness depending on how dimmed it is.
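For illustration, if an embeddable Pascal scripting engine were wired in (RemObjects PascalScript and DWScript are two established Object Pascal options; whether either fits is for the author to judge), the per-object drawing script could read like ordinary Pascal, with the host registering the state variables and drawing primitives. Everything named below is a hypothetical stand-in, not any specific library's API:

// Hypothetical script run by an embedded Pascal scripting engine.
// LightIsOn, SetPenColor and ELP are host-registered stand-ins.
if LightIsOn then
  SetPenColor(clYellow)
else
  SetPenColor(clGray);
ELP(1110, 1110, 1290, 1290, 3);

The host application registers those symbols with the engine and re-runs the script whenever the object's state changes, which keeps all IF/THEN/ELSE parsing out of the hand-rolled command interpreter.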

    Read the article

  • Custom Configuration Section Handlers

Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience.  More recently, the framework itself has helped things by adding the <connectionStrings /> section, so at least connection strings are in their own section and not adding to the appSettings clutter that pollutes most apps.  I recommend avoiding appSettings for several reasons.  In addition to those listed there, I would add that strong typing and validation are additional reasons to go the custom configuration section route. For my ASP.NET Tips and Tricks talk, I use the following example: a simple DemoSettings class that includes two fields.  The first is an integer representing how many attendees are present for the talk, and the second is the title of the talk.  The setup in web.config is as follows:

<configSections>
  <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" />
</configSections>

<DemoSettings sessionAttendees="100" title="ASP.NET Tips and Tricks DevConnections Spring 2010" />

Referencing the values in code is strongly typed and straightforward.  Here I have a page that exposes two properties which internally get their values from the configuration section handler:

public partial class CustomConfig1 : System.Web.UI.Page
{
    public string SessionTitle
    {
        get { return DemoSettings.Settings.Title; }
    }

    public int SessionAttendees
    {
        get { return DemoSettings.Settings.SessionAttendees; }
    }
}

Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set things up in the config file and how to refer to the settings in code.  All that remains is to see the class itself:

public class DemoSettings : ConfigurationSection
{
    private static DemoSettings settings =
        ConfigurationManager.GetSection("DemoSettings") as DemoSettings;

    public static DemoSettings Settings
    {
        get { return settings; }
    }

    [ConfigurationProperty("sessionAttendees", DefaultValue = 200, IsRequired = false)]
    [IntegerValidator(MinValue = 1, MaxValue = 10000)]
    public int SessionAttendees
    {
        get { return (int)this["sessionAttendees"]; }
        set { this["sessionAttendees"] = value; }
    }

    [ConfigurationProperty("title", IsRequired = true)]
    [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")]
    public string Title
    {
        get { return (string)this["title"]; }
        set { this["title"] = value; }
    }
}

The class is pretty straightforward, but there are some important components to note.  First, it must inherit from System.Configuration.ConfigurationSection.  Next, as a convention I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and to expose this instance via a static read-only property, Settings.  Note that the types of both of these are the type of my class, DemoSettings. The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file.  The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using both XML standard naming conventions and C# naming conventions).  In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired).  If it is required, then it doesn't make sense to include a default value.
Beyond defaults and IsRequired, you can specify more advanced validation rules for the configuration values using additional C# attributes, such as [IntegerValidator] and [StringValidator].  Using these, you can declaratively specify that your configuration values must fall in a given range, or must omit certain forbidden characters, for instance.  Of course, you can write your own custom validation attributes, and there are others provided in System.Configuration. Individual sections can also be loaded from separate files, using syntax like this:

<DemoSettings configSource="demosettings.config" />

Summary
Using a custom configuration section handler is not hard.  If your application or component requires configuration, I recommend creating a custom configuration handler dedicated to your app or component.  Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.
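On the "write your own custom validation attributes" point, a rough sketch of what that pairing looks like is below, assuming the standard System.Configuration base classes; the class names and the odd-number rule are illustrative, not from the talk:

// Illustrative custom validator: rejects even values.
public class OddNumberValidator : ConfigurationValidatorBase
{
    public override bool CanValidate(Type type)
    {
        return type == typeof(int);
    }

    public override void Validate(object value)
    {
        if ((int)value % 2 == 0)
            throw new ArgumentException("Value must be an odd number.");
    }
}

// The attribute that wires the validator to a configuration property.
public class OddNumberValidatorAttribute : ConfigurationValidatorAttribute
{
    public override ConfigurationValidatorBase ValidatorInstance
    {
        get { return new OddNumberValidator(); }
    }
}

You would then decorate a configuration property with [OddNumberValidator] alongside its [ConfigurationProperty] attribute, just as [IntegerValidator] is used above.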

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
MapRedux – #PowerShell and #Big Data

Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced with one. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature, combined with the ability of Invoke-Command to create remote jobs on networked servers, provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

C:> winrm quickconfig
WinRM is not set up to receive requests on this machine.
The following changes must be made:
Set the WinRM service type to auto start.
Start the WinRM service.
Make these changes [y/n]? y

Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

Invoke-MapRedux
Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

$MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

Invoke-MapRedux takes an instance of a MapReduceItem, which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

Map
Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property, which offers an array of data to be processed by the Map function.

$aMap =
{
    Param
    (
        [PsObject] $dataset
    )
    # Indicate the job is running on the remote node.
    Write-Host ($env:computername + "::Map");
    # The hashtable to return
    $list = @{};
    # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
    # ... The $dataset has a single 'Data' property which contains an array of data rows
    #     which is a subset of the originally submitted data set.
    # Return the hashtable (Key, PSObject)
    Write-Output $list;
}

Reduce
Likewise, the Reduce function follows a simple prototype that takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

$aReduce =
{
    Param
    (
        [object] $key,
        [PSObject] $dataset
    )
    Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
    # The hashtable to return
    $redux = @{};
    # Return
    Write-Output $redux;
}

All Together Now
When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

# Import the MapRedux and SQL Server providers
Import-Module "MapRedux"
Import-Module "sqlps" -DisableNameChecking

# Query the database for a dataset
Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
$query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
Write-Host "Query: $query"
$dataset = Invoke-SqlCmd -query $query

# Build the Map function
$MyMap =
{
    Param
    (
        [PsObject] $dataset
    )
    Write-Host ($env:computername + "::Map");
    $list = @{};
    foreach($row in $dataset.Data)
    {
        # Write-Host ("Key: " + $row.MyKey.ToString());
        if($list.ContainsKey($row.MyKey) -eq $true)
        {
            $s = $list.Item($row.MyKey);
            $s.Sum += $row.Value1;
            $s.Count++;
        }
        else
        {
            $s = New-Object PSObject;
            $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
            $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
            $list.Add($row.MyKey, $s);
        }
    }
    Write-Output $list;
}

# Build the Reduce function
$MyReduce =
{
    Param
    (
        [object] $key,
        [PSObject] $dataset
    )
    Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
    $redux = @{};
    # running totals for this key
    $sum = 0;
    $count = 0;
    foreach($s in $dataset.Data)
    {
        $sum += $s.Sum;
        $count += 1;
    }
    # Reduce to the average for this key
    $redux.Add($key, $sum / $count);
    # Return
    Write-Output $redux;
}

# Create the item data
$Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

# Array of processing nodes...
$MyNodes = ("node1", "node2", "node3", "node4", "localhost")

# Run the Map Reduce routine...
$MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

# Show the results
Set-Location C:\
$MyMrResults | Out-GridView

Conclusion
I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform on which to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
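As an aside, the remoting plumbing that the module builds on can be seen in isolation with a plain Invoke-Command fan-out; a minimal sketch (the node names are hypothetical placeholders):

# Run one scriptblock on several nodes as background jobs and collect the results.
$nodes = "node1", "node2"                     # hypothetical node names
$work = { param($chunk) ($chunk | Measure-Object -Sum).Sum }
$jobs = foreach ($n in $nodes)
{
    Invoke-Command -ComputerName $n -ScriptBlock $work -ArgumentList (,(1..100)) -AsJob
}
$results = $jobs | Wait-Job | Receive-Job

Each node computes its partial sum independently, which is exactly the shape of the Map phase above.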
Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
I know that I will get the usual amount of comments like "Oh, Jörg, you can't be negative about Oracle" for this article. However, as usual, I want to explain the logic behind my reasoning. Yes, I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11. I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490, introduced in 2004, for example, was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time, the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However, from my perspective, we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not, I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, closing the darned single-socket gap. It may be different at your site; however, that's what I remember about that time from talking with customers. At first: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005; plus 8 years is 2013, at minimum. Then you can still opt for 3 years of Extended Support: 2016, at minimum. In 2016, systems purchased in 2009 are 7 years old, even systems purchased at the very end of that generation's lifetime. Those are the rules as written in the linked document, and I said minimum: the actual dates are even further in the future. Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018, and Sustaining Support is indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand when some people write that Oracle is less protective of hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support. I would like to offer a different perspective as well. I have to be a little cautious here, because this is going into roadmap territory, so I will stick to the public sources: John Fowler said last year that we have to expect at least 3x the single-thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math, and not to disclose anything about performance; all existing SPARC cores are considered equal here). Given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 on a single T4 SPARC processor. The Fowler roadmap presentation talked about 4-socket systems with T4.
So that is 32 V215, 16 or 32 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to yield other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much smaller than most people now assume. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume that telling you even a word more could leave me with much more time to blog.

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
Easy and instant deployments and instant scale for .NET? A while back, a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have been needed for long-term use by the entire community. From that, we formed a partnership with some folks at Microsoft to make v2 into something that would see wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?

Heroku
Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then, when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time, there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack).  So I tucked the idea in the back of my head and moved on.

AppHarbor Enters The Scene
I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web, and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have, and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI for getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor, then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.

From Zero To Deployed in 15 Minutes (Or Less)
Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been "from zero to deployed in less than 5 minutes", but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
Ground rules:
- .NET application with a valid database connection
- Start from zero
- Deployed with AppHarbor or an alternative
- A timer displayed in the video that runs during the entire process
- Video response published on YouTube or an acceptable alternative
- Video(s) must be published by March 15th at 11:59PM CST; either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

From Zero To Deployed In 15 Minutes (Or Less) Part 1
From Zero To Deployed In 15 Minutes (Or Less) Part 2
From Zero To Deployed In 15 Minutes (Or Less) Part 3
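For reference, the deploy step the videos demonstrate boils down to adding AppHarbor as a git remote and pushing; a minimal sketch from a solution directory is below. The remote URL is a placeholder for the one AppHarbor displays when you create an application:

git init
git add .
git commit -m "Initial commit"
git remote add appharbor https://example.appharbor.com/example.git   # placeholder URL
git push appharbor master

On push, AppHarbor builds the solution, runs any unit tests, and deploys only if everything passes, as described above.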

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be.   I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it).   Let's look at it from the clean code point of view, and see if it really helps us keep our code understandable, reusable and testable.   Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would be to build a recursive method performing a classic depth-first traversal and returning the sum.

private int CalculateTreeSum(Node top)
{
    int sumOfChildNodes = 0;
    foreach (Node childNode in top.ChildNodes)
    {
        sumOfChildNodes += CalculateTreeSum(childNode);
    }
    return top.Value + sumOfChildNodes;
}

"Do One Thing"
Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree and performs some computation, in this case calculating a sum. Doing two things in one method is definitely a bad thing, for several reasons:
- Understandability: readability / refactoring.
- Reusability: when overriding, there is no way to override the computation without copying the iteration code, and vice versa.
- Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration.

I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call, in our case the sum, against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off, I tell you: according to Murphy's Law, the iteration will have a bug as well as the calculation. Both bugs in combination will cause the sum to be accidentally exactly what you expect, and the test will PASS. :)

Ok, let's use yield!
That's why it is generally a very good idea not to mix but to isolate "things". Ok, let's use yield!

private int CalculateTreeSumClean(Node top)
{
    IEnumerable<Node> treeNodes = GetTreeNodes(top);
    return CalculateSum(treeNodes);
}

private int CalculateSum(IEnumerable<Node> nodes)
{
    int sumOfNodes = 0;
    foreach (Node node in nodes)
    {
        sumOfNodes += node.Value;
    }
    return sumOfNodes;
}

private IEnumerable<Node> GetTreeNodes(Node top)
{
    yield return top;
    foreach (Node childNode in top.ChildNodes)
    {
        foreach (Node currentNode in GetTreeNodes(childNode))
        {
            yield return currentNode;
        }
    }
}

The two methods know nothing about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion. This will not influence your calculation logic.
And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

How about not using yield?
Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a class about 150 lines long to implement the iteration logic.

[CompilerGenerated]
private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
{
    ...
    150 lines of generated code
    ...
}

Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature: we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

Note: The recursive depth traversal algorithm used above is possibly the most compact one, but not the best one from the performance and memory utilization point of view. It was chosen to emphasize the other, primary aspects of this post.
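Once iteration and computation are decoupled like this, the computation side can even collapse into a one-liner. A small sketch, assuming the same Node type and the GetTreeNodes iterator from above:

// Assumes 'using System.Linq;' at the top of the file.
private int CalculateTreeSumLinq(Node top)
{
    // With iteration isolated in GetTreeNodes, LINQ's Sum replaces CalculateSum entirely.
    return GetTreeNodes(top).Sum(node => node.Value);
}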

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
The sample demonstrates using an orchestration for throttling, and using context routing. Usually throttling is implemented at the host level (in BizTalk 2010 we can also use host-instance-level throttling). Demonstrated here is throttling with an orchestration convoy that slows down the message flow from some customers; the sample implements a sort of quality-of-service-agreement layer for different kinds of customers. The sample also demonstrates context routing between orchestrations, which has several advantages over content routing. For example, we don't have to create a property schema and promote properties on the schemas, and we don't have to change the message content to change the routing.

Use case:  The BizTalk application has a main processing orchestration that processes all input messages. The application usually works as an OLTP application: input messages come in random order without peaks, a typical scenario for on-line users. But sometimes big batch payloads come in, and these batches overload the processing orchestrations. All processes activated by on-line users after the payload go into the same queue and are processed only after the payload. The result is that on-line users see a significant delay in processing; it can be minutes or hours, depending on the batch size.

Requirements: On-line users' processing should work without delays. Big batches cannot disturb on-line users. There should be a higher priority for the on-line users and a lower priority for the batches.

Design: The decision is to divide the message flow into two branches, one for on-line users and a second for batches. The batch branch feeds messages to the processing line with low priority, and the on-line users' branch with high priority. All messages are delivered by a high-speed receive port. The BTS.ReceivePortName context property is used for routing. The Router orchestration separates the messages sent by on-line users from the batch messages. But the Router does not use the BizTalk-provided value of this property; the Router sets this value itself, using the content of the messages to decide whether a message is from on-line users or from batches. The message context property BTS.ReceivePortName is changed accordingly; its value works as a recipient address, the "To" address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages with context equal to "ToBatch" are picked up by the BatchBottleneck orchestration. It is a uniform convoy orchestration, and it throttles the message flow, delaying message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to "ToProcess" and sends messages one after another with a small delay in between. The delay can be configured in the BizTalk config file as:

<appSettings>
    <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>
</appSettings>

Of course, messages with context equal to "ToProcess" are picked up by the MainProcess orchestration.

NOTES: Filters with string values: in orchestrations (the first Receive shape in the orchestration), use string values WITH quotes; in send ports, use string values WITHOUT quotes. Filters on send ports are dynamic; we can change them at run time. Filters on orchestrations are static; we can change them only at design time.
To check the existence of a promoted property inside an orchestration, use the Expression shape with a construction like this:

if (BTS.ReceivePortName exists myMessage) { …; }

It is not possible in the Message Assignment shape, because using the "if" statement inside Message Assignment is prohibited. Several predefined context properties behave in specific ways; take MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName: they require some internal rules to be applied to the format or usage of the property. The MessageTracking.* properties require that tracking be in use, and you can get unexpected run-time errors in some cases. My recommendation is: use a very limited set of the predefined context properties. To "attach" a new promoted property to a message, we have to use correlation; the correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].
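As a quick illustration of the quoting note above, the same subscription written in both places looks like this (property and values taken from the sample; only the quoting differs):

Orchestration Receive shape filter:  BTS.ReceivePortName == "ToBatch"
Send port filter:                    BTS.ReceivePortName == ToBatch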

    Read the article

  • Vendors: Partners or Salespeople?

    - by BuckWoody
    I got a great e-mail from a friend that asked about how he could foster a better relationship with his vendors. So many times when you work with a vendor it’s more of a used-car sales experience than a partnership – but you can actually make your vendor more of a partner, as long as you both set some ground-rules at the start. Sit down with your vendor, and have a heart-to-heart talk with them, explain that they won’t win every time, but that you’re willing to work with them in an honest way on both sides. Here’s the advice I sent him verbatim. I hope this post generates lots of comments from both customers and vendors. I don’t expect that you’ve had a great experience with your Microsoft reps, but I happen to work with some of the best sales teams in the business, and our clients tell us that all the time. “The key to this relationship is to keep the audience really small. Ideally there should be one person from your side that is responsible for the relationship, and one from the vendor’s side. Each responsible person should have the authority to make decisions, and to bring in other folks as needed for a given topic, project or decision.   For Microsoft, this is called an “Account Manager” – they aren’t technical, they aren’t sales. They “own” a relationship with a company. They learn what the company does, who does it, and how. They are responsible to understand what the challenges in your company are. While they don’t know the bits and bytes of everything we sell, they know what each thing does, and who to talk to about it. I get a call from an Account Manager every week that has pre-digested an issue at an organization and says to me: “I need you to set up an architectural meeting with their technical staff to get a better read on how we can help with problem X.” I do that and then report back to the Account Manager what we learned.  All through this process there’s the atmosphere of a “team”, not a “sales opportunity” per se. I’ve even recommended that the firm use a rival product, and I’ve never gotten push-back on that decision from my Account Managers.   But that brings up an interesting point. Someone pays an Account Manager and pays me. They expect something in return. At some point, you have to buy something. Not every time, not every situation – sometimes it’s just helping you with what you already bought from us. But the point is that you can’t expect lots of love and never spend any money. That’s the way business works.   Finally, don’t view the vendor as someone with their hand in your pocket – somebody that’s just trying to sell you something and doesn’t care if they ever see you again – unless they deserve it. There are plenty of “love them and leave them” companies out there, and you may have even had this experience with us, but that isn’t the case in the firms I work with. In fact, my customers get a questionnaire that asks them that exact question. “How many times have you seen your account team? Did you like your interaction with them? Can they do better?” My raises, performance reviews and general standing in my group are based on the answers the company gives.  Ask your vendor if they measure their sales and support teams this way – if not, seek another vendor to partner with.   Partnering with someone is a big deal. It involves time and effort on your part, and on the vendor’s part. If either of you isn’t pulling your weight, it just won’t work. 
You have every right to expect them to treat you as a partner, and they have the same right to expect it from your side."

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chandler from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm's externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

Mike Chandler, Qualcomm

Q: Did you run into any issues when integrating all of the different applications together?
A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

Q: What has been the biggest benefit your end users have seen?
A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try to make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

Q: Was there any resistance internally when implementing the solution? If so, how did you overcome it?
A: Within a large company, I'm sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

Q: Given the performance, what do you estimate to be the top-end capacity of the system?
A: I think our top-end capacity is really only limited by our hardware. I'm comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. DB connections, without impacting performance.

Q: What's the overview or summary of feedback from the users interacting with the site?
A: Feedback has been overwhelmingly positive from both the business and our customers. They're very happy with the new SSO environment, the new look and feel, and the performance of the site.
Of course, it's not all roses. No matter what, there are always going to be people who don't like the layout or the color scheme, etc. By and large, though, customers are happy and the business is happy.

Q: Can you describe the impressions of the site before and after the project within Qualcomm?
A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up, retry the steps, and things would work fine, so why didn't it work the first time? From a UI perspective, we'd hear comments like it looked like it was built by a high school student.

Vince Casarez & Gourav Goyal, Keste

Q: Did you run into any obstacles when implementing the solution?
A: It's interesting; some people call them "obstacles," but on this project we just called them "dependencies." There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams needed for a seamless login experience and a seamless end-user experience. There was also a set of dependencies on the user acceptance testing to make sure that everyone understood the use cases for how the system would be used. Branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for team members to offer suggestions on how things could be simpler. But all the work up front on the user design, with the business driving this set of experiences, minimized the downstream suggestions that tend to distract a team. In this case, the up-front work allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

Q: Was there a lot of custom work that needed to be done for this particular solution?
A: The focus for this particular solution was really on the custom processes. The interesting thing is that for the data flows and the integration with applications there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier, so we didn't have to implement core functionality like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure, or technology stack, of Oracle. This allowed the team to center the project focus around the business flows and use cases, meet the core requirements, and keep the project on time.

Q: What do you think were the keys to success for rolling out WebCenter?
A: The five main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement to business owners driving functionality to IT development alignment; 2) up-front design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct their resolution; and 5) constant prioritization by the business of the issues for development to fix.
It also helps to have great team chemistry and really smart people working on the project.

If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action: "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," from Oracle WebCenter.

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

CodePlex Daily Summary for Sunday, September 29, 2013

Popular Releases:

- AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features: a list of words (mp3 files) is available upon typing when a download path is defined; a list of download paths is added; path history settings added. Bugs fixed: case mismatch in the word search field fixed; path-does-not-exist bug fixed when history has been used; path, when filled from dialog, not stored; refresh of the autocomplete list after path change; sought word is deleted when the path is changed; at the end the sought word list is deleted; word list not refreshed download end...
- Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/export rules; 2. Tabular mode, multi-server connections.
- Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts the emphasis on the FilteredStream and on easing how to manage exceptions that can occur due to the network or any other issue you might encounter. It will be available through NuGet on 29/09/2013. FilteredStream features provided by the Twitter Stream API: the ability to track specific keywords, specific users, and specific locations. Additional features: detect the reasons a tweet has been retrieved from the Filtered API. You have access to both the ma...
- AcDown?????: AcDown????? v4.5
- OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (it can run with no issues from a flash drive).
- CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release includes the following significant features: stereoscopic 3D display support; based on the Firestorm viewer 4.4.2 codebase. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: this software is not provided or supported by Linden Lab, the makers of ...
- CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK. Fixed an issue where the constant for the primary key attribute was duplicated in all entity classes. Added the ability to override the base class for entity classes.
- C# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
- CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
- SimpleExcelReportMaker: Serm 0.02: Source code and sample.
- Magick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: the ToBitmap method of MagickImage returns a png instead of a bmp; the value for full transparency changed from 255 (Q8) / 65535 (Q16) to 0; MagickColor now uses floats instead of Byte/UInt16.
- Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of movie folders and files, there has been some refinement done. This release also introduces Blu-ray movie folder support for pre-Frodo and Frodo-onwards versions of XBMC. To start with, the context menu option for renaming movies now has three sub-options: Movie & Folder, Movie only, and Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...
- WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart; this is the maintenance release to update to the new movie API. Please send feedback on fix requests.
- FFXIV Crafting Simulator: Crafting Simulator 2.3: Major refactoring of the code-behind; added a current durability and a current CP textbox.
- DNN CMS Platform: 07.01.02: Major highlights: added the ability to manage the Vanity URL prefix; added the ability to filter members in the member directory by role; fixed an issue where the user could inadvertently click the login button multiple times; fixed issues where core classes could not be used in an out-of-process cache provider; fixed an issue where the profile visibility submenu was not displayed correctly; fixed an issue where the member directory was broken when the Convert URL to lowercase setting was enabled; fixed issu...
- Rawr: Rawr 5.4.1: This is the downloadable WPF version of Rawr! For the web-based version, see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
- Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: SignalR sample is added.
- CODE Framework: 4.0.30923.0: See the change notes in the documentation section for details on what's new. Note: if you download the class reference help file, you have to right-click the file, pick "Properties," and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.
- JayData - The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook, or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js, or Sencha Touch 2, and can be used on Node.js as well. See it in action in this 6-minute video. KendoUI examples: JayData example site. Examples for map integration: JayData example site. What's new in JayData 1.3.2 - Indian Summer Edition: For detai...
- ZXing.Net: ZXing.Net 0.12.0.0: Sync with rev. 2892 of the Java version; new PDF417 decoder; improved Aztec decoder; global speed improvements; direct Kinect support for ColorImageFrame; better Structured Append support; many other small bug fixes and improvements.

New Projects:

- CACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASE
- Classic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.
- CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.
- Easy Code: A setting framework.
- EduSoft: This is a school eg.
- GameStuff: GameStuff is a library of physics and geometry concepts for video games.
- Nekora Test Project: Nekora test project.
- PopCorn Console Game: Simple console game.
- RadioController: This project started from people installing tablets in Mustangs; you would typically lose most control of the radio. This project brings that back!
- Random searcher i pochodne: A search tool for multimedia files, and whatever your heart desires.
- SporkRandom: A .NET (C#, Visual Basic) interface to the true random number generator service of random.org.

    Read the article

  • Car animations in Frogger on Javascript

    - by Mijoro Nicolas Rasoanaivo
I have to finish a Frogger game in JavaScript for my engineering school degree, but I don't know how to animate the cars. So far I have tried manipulating the CSS and the DOM, and I wrote a script with setTimeout(), but none of them works. Can I have some help, please? Here's my code and my CSS:

<html>
<head>
  <title>Image d&eacute;filante</title>
  <link rel="stylesheet" type="text/css" href="map_style.css"/>
</head>
<body onload="start()">
  <canvas id="jeu" width="800" height="450"></canvas>
  <img id="voiture" class="voiture" src="car.png" onload="startTimerCar()">
  <img id="voiture2" class="voiture" src="car.png" onload="startTimerCar()">
  <img id="voiture3" class="voiture" src="car.png" onload="startTimerCar()">
  <img id="bigrig" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
  <img id="bigrig2" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
  <img id="bigrig3" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
  <img id="hotrod" src="hotrod.png" onload="startTimerHotrod()">
  <img id="hotrod2" src="hotrod.png" onload="startTimerHotrod()">
  <img id="turtle" src="turtles_diving.png" onload="startTimerTurtle()">
  <img id="turtle2" src="turtles_diving.png" onload="startTimerTurtle()">
  <img id="turtle3" src="turtles_diving.png" onload="startTimerTurtle()">
  <img id="small" src="log_small.png" onload="startTimerSmall()">
  <img id="small2" src="log_small.png" onload="startTimerSmall()">
  <img id="small3" src="log_small.png" onload="startTimerSmall()">
  <img id="small4" src="log_small.png" onload="startTimerSmall()">
  <img id="med" src="log_medium.png" onload="startTimerMedium()">
  <img id="med2" src="log_medium.png" onload="startTimerMedium()">
  <img id="med3" src="log_medium.png" onload="startTimerMedium()">
  <script type="text/javascript">
    var X = 1;
    var timer;

    function start(){
      setInterval(init, 10);
      document.onkeydown = move;
      var canvas = document.getElementById('jeu');
      var context = canvas.getContext('2d');
      var frog = document.getElementById('frog');
      var posX_frog = 415;
      var posY_frog = 400;
      var voiture = [document.getElementById('voiture'), document.getElementById('voiture2'), document.getElementById('voiture3')];
      var bigrig = [document.getElementById('bigrig'), document.getElementById('bigrig2'), document.getElementById('bigrig3')];
      var hotrod = [document.getElementById('hotrod'), document.getElementById('hotrod2')];
      var turtle = [document.getElementById('turtle'), document.getElementById('turtle2'), document.getElementById('turtle3')];
      var small = [document.getElementById('small'), document.getElementById('small2'), document.getElementById('small3'), document.getElementById('small4')];
      var med = [document.getElementById('med'), document.getElementById('med2'), document.getElementById('med3')];

      // Repaint the playfield bands and the frog on every tick
      function init() {
        context.fillStyle = "#AEEE00";
        context.fillRect(0, 0, 800, 50);
        context.fillRect(0, 200, 800, 50);
        context.fillRect(0, 400, 800, 50);
        context.fillStyle = "#046380";
        context.fillRect(0, 50, 800, 150);
        context.fillStyle = "#000000";
        context.fillRect(0, 250, 800, 150);
        var img = new Image();
        img.src = "./frog.png";
        context.drawImage(img, posX_frog, posY_frog, 46, 38);
      }

      // Arrow keys move the frog in 50px steps, clamped to the playfield
      function move(event){
        if (event.keyCode == 39){ // right
          if (posX_frog < 716){ posX_frog += 50; }
        }
        if (event.keyCode == 37){ // left
          if (posX_frog > 25){ posX_frog -= 50; }
        }
        if (event.keyCode == 38){ // up
          if (posY_frog > 10){ posY_frog -= 50; }
        }
        if (event.keyCode == 40){ // down
          if (posY_frog < 400){ posY_frog += 50; }
        }
      }
    }
  </script>
</body>

And my map_style.css:

#jeu { z-index: 10; width: 800px; height: 450px; border: 2px black solid; overflow: hidden; position: relative; transition: width 2s; -moz-transition: width 2s; /* Firefox 4 */ -webkit-transition: width 2s; /* Safari and Chrome */ }
#voiture { z-index: 100; position: absolute; top: 315px; left: 48px; transition-timing-function: linear; -webkit-transition-timing-function: linear; -moz-transition-timing-function: linear; }
#voiture2 { z-index: 100; position: absolute; top: 315px; left: 144px; }
#voiture3 { z-index: 100; position: absolute; top: 315px; left: 240px; }
#bigrig { z-index: 100; position: absolute; top: 365px; left: 200px; }
#bigrig2 { z-index: 100; position: absolute; top: 365px; left: 400px; }
#bigrig3 { z-index: 100; position: absolute; top: 365px; left: 600px; }
#hotrod { z-index: 100; position: absolute; top: 265px; left: 200px; }
#hotrod2 { z-index: 100; position: absolute; top: 265px; left: 500px; }
#hotrod3 { z-index: 100; position: absolute; top: 265px; left: 750px; }
#turtle { z-index: 100; position: absolute; top: 175px; left: 50px; }
#turtle2 { z-index: 100; position: absolute; top: 175px; left: 450px; }
#turtle3 { z-index: 100; position: absolute; top: 175px; left: 250px; }
#small { z-index: 100; position: absolute; top: 125px; left: 20px; }
#small2 { z-index: 100; position: absolute; top: 125px; left: 220px; }
#small3 { z-index: 100; position: absolute; top: 125px; left: 420px; }
#small4 { z-index: 100; position: absolute; top: 125px; left: 620px; }
#med { z-index: 100; position: absolute; top: 75px; left: 120px; }
#med2 { z-index: 100; position: absolute; top: 75px; left: 320px; }
#med3 { z-index: 100; position: absolute; top: 75px; left: 520px; }

I should add that I am required to code this in HTML5, CSS3, and plain JavaScript, not jQuery, which is much easier; I have already created games in jQuery... It takes me too much time and too many lines of code this way.
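One common way to get the cars moving, given the absolutely positioned <img> elements above, is to update each car's style.left from a single shared timer and wrap the car around once it leaves the playfield. The sketch below is illustrative, not from the original post; the names CAR_SPEED, CANVAS_WIDTH, and animateCars() are assumptions:

// Minimal sketch (assumed names, not the original poster's code):
// move every car <img> from one shared timer by updating its CSS "left".
var CAR_SPEED = 2;      // pixels per tick (assumed value)
var CANVAS_WIDTH = 800; // matches the #jeu canvas width

function animateCars() {
  var carIds = ['voiture', 'voiture2', 'voiture3'];
  for (var i = 0; i < carIds.length; i++) {
    var car = document.getElementById(carIds[i]);
    // style.left is empty until first set; fall back to the stylesheet value
    var x = parseFloat(car.style.left || getComputedStyle(car).left);
    x += CAR_SPEED;
    if (x > CANVAS_WIDTH) {
      x = -car.width; // wrap around: re-enter from the left edge
    }
    car.style.left = x + 'px';
  }
}

setInterval(animateCars, 16); // roughly 60 frames per second

Each tick nudges every car a few pixels and lets it re-enter from the left once it passes the 800px canvas width; the same pattern extends to the trucks, turtles, and logs with different speeds or directions. Alternatively, the cars could be drawn directly on the canvas inside init(), incrementing an x offset before each drawImage() call. Either approach replaces the per-image startTimer*() onload handlers, which are never defined in the snippet above.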

    Read the article
