Search Results

Search found 13313 results on 533 pages for 'hit count'.


  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code in a simple test harness application, so I was confident in it, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element. The functoid I was having problems with was one where I had used the XPath functoid type; this had seemed a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    Category | Description | Toolbox Group
    Assert | Internal Use Only | Advanced
    Conversion | Converts characters to and from numerics and converts numbers from one base to another. | Conversion
    Count | Internal Use Only | Advanced
    Cumulative | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract | Internal Use Only | Database
    DatabaseLookup | Internal Use Only | Database
    DateTime | Adds date, time, date and time, or adds days to a specified date, in output data. | Date/Time
    ExistenceLooping | Internal Use Only | Advanced
    Index | Internal Use Only | Advanced
    Iteration | Internal Use Only | Advanced
    Keymatch | Internal Use Only | Advanced
    Logical | Controls conditional behavior of other functoids to determine whether particular output data is created. | Logical
    Looping | Internal Use Only | Advanced
    MassCopy | Internal Use Only | Advanced
    Math | Performs specific numeric calculations such as addition, multiplication, and division. | Mathematical
    NilValue | Internal Use Only | Advanced
    Scientific | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions. | Scientific
    Scripter | Internal Use Only | Advanced
    String | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. | String
    TableExtractor | Internal Use Only | Advanced
    TableLooping | Internal Use Only | Advanced
    Unknown | Internal Use Only | Advanced
    ValueMapping | Internal Use Only | Advanced
    XPath | Internal Use Only | Advanced

    Links:
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
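    For reference, the category is set in the functoid's constructor when deriving from BaseFunctoid. Below is a minimal sketch of the kind of change described above (switching from the internal-use XPath category to String); the ID value, resource names and the Execute method are illustrative placeholders, not anything from the original project:

    ```csharp
    using System.Reflection;
    using Microsoft.BizTalk.BaseFunctoids;

    public class ConfigLookupFunctoid : BaseFunctoid
    {
        public ConfigLookupFunctoid()
        {
            this.ID = 6001; // custom functoid IDs should be 6000 or above

            // Placeholder resource names for the functoid's name, tooltip and bitmap.
            SetupResourceAssembly("MyFunctoids.FunctoidResources", Assembly.GetExecutingAssembly());
            SetName("IDS_CONFIGLOOKUP_NAME");
            SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
            SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
            SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

            // FunctoidCategory.XPath is internal-use only; String keeps the functoid
            // usable in custom maps (it will show up in the String toolbox group).
            this.Category = FunctoidCategory.String;

            SetMinParams(1);
            SetMaxParams(1);
            this.OutputConnectionType = ConnectionType.AllExceptRecord;
            AddInputConnectionType(ConnectionType.AllExceptRecord);

            // Tells the mapper which method to invoke at runtime.
            SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "Execute");
        }

        // Placeholder implementation - the real functoid looked values up in a config file.
        public string Execute(string xpath)
        {
            return xpath;
        }
    }
    ```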

    Read the article

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple rule: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, if you use songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms that split the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple signals. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for those interested, this is my algorithm in C#: // C = threshold, N = size of history buffer / 1024 public void PlaceBeatMarkers(float C, int N) { List<float> instantEnergyList = new List<float>(); short[] samples = soundData.Samples; float timePerSample = 1 / (float)soundData.SampleRate; int sampleIndex = 0; int nextSamples = 1024; // Calculate instant energy for every 1024 samples. while (sampleIndex + nextSamples < samples.Length) { float instantEnergy = 0; for (int i = 0; i < nextSamples; i++) { instantEnergy += Math.Abs((float)samples[sampleIndex + i]); } instantEnergy /= nextSamples; instantEnergyList.Add(instantEnergy); if(sampleIndex + nextSamples >= samples.Length) nextSamples = samples.Length - sampleIndex - 1; sampleIndex += nextSamples; } int index = N; int numInBuffer = index; float historyBuffer = 0; //Fill the history buffer with n * instant energy for (int i = 0; i < index; i++) { historyBuffer += instantEnergyList[i]; } // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker. while (index + 1 < instantEnergyList.Count) { if(instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C) beatMarkers.Add((index + 1) * 1024 * timePerSample); historyBuffer -= instantEnergyList[index - numInBuffer]; historyBuffer += instantEnergyList[index + 1]; index++; } }
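    One common approach, sketched below, is to transform each 1024-sample window with the FFT and then sum the squared magnitudes of groups of adjacent frequency bins, giving one energy value per band; each band then gets its own history buffer and threshold test, exactly like the single-band algorithm above. The `Fft(real, imag)` call here is a placeholder for whatever implementation you plug in (for example the Cooley-Tukey code linked above), not a real API:

    ```csharp
    // Sketch: turn one 1024-sample window into per-band energies using an FFT.
    float[] GetBandEnergies(short[] samples, int offset, int windowSize, int bandCount)
    {
        var real = new float[windowSize];
        var imag = new float[windowSize];
        for (int i = 0; i < windowSize; i++)
        {
            real[i] = samples[offset + i]; // copy the window; imag stays 0 for a real signal
        }

        Fft(real, imag); // placeholder: in-place FFT of your choosing

        // Only the first windowSize / 2 bins are unique for a real input signal.
        int usefulBins = windowSize / 2;
        int binsPerBand = usefulBins / bandCount;
        var bandEnergy = new float[bandCount];

        for (int band = 0; band < bandCount; band++)
        {
            float sum = 0f;
            for (int bin = band * binsPerBand; bin < (band + 1) * binsPerBand; bin++)
            {
                sum += real[bin] * real[bin] + imag[bin] * imag[bin]; // squared magnitude
            }
            bandEnergy[band] = sum / binsPerBand;
        }
        return bandEnergy;
    }
    ```

    Beats detected in the lowest bands then tend to line up with kick drum and bass hits, which is usually what you want to drive gameplay from, while steady guitar energy in the mid bands no longer drowns them out.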

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt hour) and measuring computer energy consumption over time. The trends are impressive: the energy needed per computation has halved every 1.5 years for the last 60 years. Battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power being explored include light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals. Specific devices designed for a purpose are much more efficient than general purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, etc. Dr. Koomey showed some examples: The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!) Streetline Parking Systems, which provide real-time data about available parking spaces; the information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces! The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait, researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.
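    A quick back-of-the-envelope calculation (my arithmetic, not a figure quoted in the talk) shows how dramatic that trend is: halving the energy per computation every 1.5 years over 60 years compounds to 2^(60/1.5) = 2^40, roughly a trillion-fold improvement, which is why work that once required a machine with its own power budget now runs for days on a pocket-sized battery.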

    Read the article

  • Should I break contract early?

    - by cbang
    About 7 months ago I made the switch from a 5 year permie role (as a support developer in C#) to a contract role. I did this because I was stagnating in my old role. The extra cash contracting is really helping too. Unfortunately my team leader has taken a dislike to me from day 1. He regularly tells me I went out contracting too early, and frequently remarks that people in their 20's have no idea what they are talking about (I am 29). I was recently given the task of configuring our reports via our in house reporting library. It works off of a database driven criteria base, with controls being loaded as needed. The configs can get fairly complex, with controls having various levels of dependency on each other. I had a short time frame to get 50 reports working, and I was told to just get the basic configuration done, after which they will be handed over to the reporting team for fine tuning, then the test team. Our updated system was deployed 2 weeks ago, and it turned out that about 15 reports had issues causing incorrect data to be returned. Upon investigation I discovered that the reporting team hadn't even looked at them, and the test team hadn't bothered to test the reports. In spite of this, my team leader has told me that it is 100% my fault. As a result, our help desk got hit hard. I worked back until 2am that night to fix the highest priority issues (on my wedding anniversary!). The next day I arrive at work at 7:45 am to continue with the fixes. I got no thanks, but keep getting repeatedly told by my manager that "I fucked up" and "this is all my fault". I told my team leader I would spend part of my weekend working to fix the remaining issues. His response was "so you fucking should! you fucked it all up!" in front of the rest of the team. I responded "No worries." and left. I spent a decent chunk of my weekend working on it. Within 2 business days of finding out about the issues, I had all the medium and high priority issues fixed. The only comments my team leader has made to me in the last 2 weeks is to tell me how I have caused a big mess, and to tell me it was all my fault. I get this multiple times a day. If I make any jokes to anyone else in the team, I get told not to be a smartass... even though the rest of the team jokes throughout the day. Apart from that, all I get is angry looks any time I am anywhere near the guy. I don't give any response other than "alright" or silence when he starts giving me a hard time. Today we found out that the pilot release for the next stage has been pushed back. My team leader has said this was caused by me (but the higher ups said no such thing). He also said I have "no understanding of the ramifications of my actions". My question is, should I break contract (I am contracted until June 30) and find another role? No one else in my team will speak up in my favour, as they are contractors too and have no interest in rocking the boat. I could complain to my team leaders boss, but I can't see that helping, as I will still be stuck in the same team. As this is my first contract, I imagine getting the next one will be hard without a reference. I can't figure out if this guy is trying to get me fired up to provoke a confrontation (the guy loves conflict), or if he is just venting anger, or what. Copping this blame day after day is really wearing me down and making me depressed... especially since I have a wife and kid to support).

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transition, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end-users to tailor their individual work environments to meet specific needs, without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice. Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain/upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready to deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, risk evaluation and the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with: Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification and Credit Rating. The process flow starts in Siebel when a customer applies for loan, switches to OPA for eligibility verification and product recommendations, before handing it off to BPM for approvals. OPA Connector for Siebel simplifies integration with Siebel’s web services framework by saving directly into Siebel the results from the self-service interview. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan Underwriters have visibility into the product mix (loan categories), status of loan applications (count of approved/rejected/pending), volume and values of loans approved per processing center, processing times, requested vs. approved amount and other relevant business metrics. Summary Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement and easy to maintain/upgrade applications (by moving customizations away from applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM. Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column as a nullable column, or as a column with default values. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script: USE master Go --Drop Database if exists IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn') DROP DATABASE AddColumn --Create the database CREATE DATABASE AddColumn GO USE AddColumn GO --Drop the table if exists IF EXISTS ( SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable') DROP TABLE ExistingTable GO --Create the table CREATE TABLE ExistingTable (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED, DateTime1 DATETIME DEFAULT GETDATE(), DateTime2 DATETIME DEFAULT GETDATE(), DateTime3 DATETIME DEFAULT GETDATE(), DateTime4 DATETIME DEFAULT GETDATE(), Gendar CHAR(1) DEFAULT 'M', STATUS1 CHAR(1) DEFAULT 'Y' ) GO -- Insert 100,000 records with defaults records INSERT INTO ExistingTable DEFAULT VALUES GO 100000

    Before adding a column, let us look at some of the details of the database. DBCC IND (AddColumn,ExistingTable,1) By running the above command, you will see 637 pages for the created table.

    Adding a column: you can add a column to the table with the following statement. ALTER TABLE ExistingTable Add NewColumn INT NULL The above will add a column with a NULL value for the existing records. Alternatively, you could add a column with a default value. ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1 The above statement will add a column with a value of 1 for the existing records. The table below shows the performance difference between the two statements.

    Parameter | Nullable Column | Default Value
    CPU | 31 | 702
    Duration | 129 ms | 6653 ms
    Reads | 38 | 116,397
    Writes | 6 | 1329
    Row Count | 0 | 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is why it takes more duration and CPU to add a column with a default value. We can verify this by several methods.

    Number of pages: the number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are comfortable using it in production; however, since there is no official word from Microsoft, use it "at your own risk". DBCC IND (AddColumn,ExistingTable,1)

    Before adding the column | 637
    Adding a column with NULL | 637
    Adding a column with DEFAULT value | 1270

    This clearly shows that pages are physically modified when the default value is applied. Please note that the higher page count in the "Adding a column with DEFAULT value" case is also partly a result of page splits. Continues…

    Read the article

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I run into a situation that I have run into quite a few times. Someone encounters a machine and the question arises: "Is there a DirectX 11 card in this machine?". Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7 and Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways for figuring out if you have a DirectX11 card, so here are the approaches that you can use, with a bonus right at the end of the post. Run DxDiag WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which unfortunately, only tells you on the "System" tab what is the highest version of DirectX installed on your machine. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things. Use the Microsoft hardware page There is a Microsoft Windows 7 compatibility center, that lists all hardware (tip: use the advanced search) and you could try and locate your device there… good luck. Use Wikipedia or the hardware vendor's website Use the Wikipedia page for the vendor cards, for both nvidia and amd. Often this information will also be in the specifications for the cards on the IHV site, but is is nice that wikipedia has a single page per vendor that you can search etc. There is a column in the tables for API support where you can see the DirectX version. Check if it is one of these recommended DX11 cards You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable. Some AMD (aka ATI) cards Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570 Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790 High end (capable of double precision): Radeon 5850, 5870, 6950, 6970 Single precision APUs: AMD E-Series APUs AMD A-Series APUs Some NVIDIA cards Low end, inexpensive DX11 hardware: GeForce GT430, GT 440, GT520, GTS 450 Quadro 400, 600 Mid-range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti Quadro 2000 High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595 Quadro 4000, 5000, 6000 Tesla C2050, C2070, C2075 Get the DirectX SDK and run DirectX Caps Viewer Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer", the filename is DXCapsViewer.exe). It will list all your devices (emulated, and real hardware ones) under the first node. Expand the hardware entries and then expand again the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11 which means it supports DirectCompute and C++ AMP. In the following screenshot of one of my old laptops, the card only goes to feature level 10. Run a utility from the web that just tells you! Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. 
However that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11 capable cards. That is exactly what the development lead for C++ AMP has done and he describes and shares that utility at this post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Wireless acting weird ubuntu 12.04 LTS

    - by Philip Yeldhos
    I'm kinda new here, so please bear with me. My wireless driver is acting very weird. It shows my router's name, but when it is connecting (after entering the correct password), the icon on the tray is like, refreshing every once in a second, while showing the animation that it is connecting. And after a few seconds, error message come up saying that wireless network is disconnected. I installed the drive through "additional drivers". What info do you need? Somebody please help. philip@philip-HP-Mini-110-3100:~$ sudo iwconfig lo no wireless extensions. eth1 IEEE 802.11 ESSID:"" Mode:Managed Frequency:2.472 GHz Access Point: Not-Associated Bit Rate:72 Mb/s Tx-Power:24 dBm Retry min limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=5/5 Signal level=0 dBm Noise level=-96 dBm Rx invalid nwid:0 Rx invalid crypt:11 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 eth0 no wireless extensions. here's what lspci -v gave me: 02:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) Subsystem: Hewlett-Packard Company Device 1483 Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at 52000000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information: Len=78 <?> Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [13c] Virtual Channel Capabilities: [160] Device Serial Number 00-00-82-ff-ff-3f-e0-2a Capabilities: [16c] Power Budgeting <?> Kernel driver in use: wl Kernel modules: wl, bcma, brcmsmac okay, i removed the driver additional drivers gave me. now, this is what has happened: lsmod gave me: philip@philip-HP-Mini-110-3100:~$ lsmod | grep brc brcmsmac 540875 0 mac80211 436455 1 brcmsmac brcmutil 14675 1 brcmsmac cfg80211 178679 2 brcmsmac,mac80211 crc8 12781 1 brcmsmac cordic 12487 1 brcmsmac and iwconfig gave me: philip@philip-HP-Mini-110-3100:~$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=19 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off eth0 no wireless extensions. and lspci -v gave me: 02:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) Subsystem: Hewlett-Packard Company Device 1483 Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at 52000000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: brcmsmac Kernel modules: bcma, brcmsmac

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf! On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics! How to fix bad plans Use local variables – optimizer can’t sniff it, so it’ll optimize for “average” value Use RECOMPILE (at the query or stored procedure level) – CPU hit OPTIMIZE FOR hint – most common value you’ll pass How to fix bad query patterns Don’t use them – ha! Catch-all queries Use dynamic SQL OPTION (RECOMPILE) Multiple execution paths Split into multiple stored procedures OPTION (RECOMPILE) Modifying parameter values Use local variables Split into outer and inner procedure OPTION (RECOMPILE) She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides. While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future. Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair! Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work even on systems where I’ve been the primary DBA for years, maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts then join us as we take a look at the Forecastfox Weather extension for Google Chrome. Getting Started As soon as the Forecastfox Weather extension has finished installing you will automatically be presented with the “Customize Forecastfox Page”. The default setting is for New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented multiple options to choose from simply click on the appropriate listing. Once you have your city/area displayed you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove Link”. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required). Forecastfox Weather in Action You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, 7 day forecast, and a static satellite image. If desired you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed. Perhaps you need the Hourly Forecast… Once again a new tab will be opened with the predicted hourly weather conditions for the current day. Going back to the popup window you may also select a specific day from the 7 day forecast. You will be presented with a “Day & Night” forecast for the chosen day with links to view “Additional Details & Hourly” information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open you can choose from a variety of different satellite images. Conclusion If you have been wanting a solid weather forecast extension for your Chrome browser then Forecastfox Weather is definitely a recommended install. Links Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Cellbi Silverlight Controls Giveaway (5 License to give away)

    - by mbcrump
    Cellbi recently updated their new Silverlight Controls to version 4 and to support Visual Studio 2010. I played with a couple of demos on their site and had to take a look. I headed over to their website and downloaded the controls. The first thing that I noticed was all of the special text effects and animations included. I emailed them asking if I could give away their controls in my January 2011 giveaway and they said yes. They also volunteered to give away 5 total license so the changes for you to win would increase.  I am very thankful they were willing to help the Silverlight community with this giveaway. So some quick rules below: ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of Cellbi Silverlight Controls! (5 License to give away) Random winner will be announced on February 1st, 2011! To be entered into the contest do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) Retweet the following : I just entered to win free #Silverlight controls from @mbcrump and @cellbi http://mcrump.me/cscfree ! Don’t change the URL because this will allow me to track the users that Tweet this page. Don’t forget to visit Cellbi because they made this possible. ---------------------------------------------------------------------------------------------------------------------------------------------------------- Before we get started with the Silverlight Controls, here is a couple of links to bookmark: The What's new in this release page is here. You can also check out the live demos here. Don’t worry about the Samples/Help Documentation. That is installed to your local HDD during the installation process. Begin by downloading the trial version and running the program. After everything is installed then you will see the following screen: After it is installed, you may want to take a look at your Toolbox in Visual Studio 2010. After you add the controls from the “Choose Items” in Silverlight and you will see that you now have access to all of these controls. At this point, to use the controls it’s as simple as drag/drop onto your Silverlight container. It will create the proper Namespaces for you. It’s hard to show with a static screenshot just how powerful the controls actually are so I will refer you to the demo page to learn more about them. Since all of these are animations/effects it just doesn’t work with a static screenshot. It is worth noting that the Sfx pack really focuses on the following core effects: I will show you the best route to get started building a new project with them below. The best page to start is the sample browser which you can access by going to SvFx Launcher. In my case, I want to build a new Carousel. I simple navigate to the Carousel that I want to build and hit the “Cs” code at the top. This launches Visual Studio 2010 and now I can copy/paste the XAML into my project. That is all there is to it. Hopefully this post was helpful and don’t forget to leave a comment below in order to win a set of the controls!  Subscribe to my feed

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard. Landlords, lessees, retailers, airlines industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities not to mention front-load much of the costs that they were able to spread out over time before. Why are IASB and FASB doing this? Their goal is to have consistent accounting for both the lessees and lessors with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS. I spoke to our resident accounting expert Seamus Moran about this to better understand how this might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS in respect to leases are "proposed." It is still inappropriate to account for leases the way they are being proposed and we still need to account for them in accordance to the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 and prior and PeopleSoft Enterprise support. The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal were published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London on the same date. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and the uses of the leased assets. There is, given the opposition to it, an excellent chance that the Leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comment of drafting. Oracle participates in the standard setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal. The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software. 
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • SQL SERVER – Asynchronous Update and Timestamp – Check if Row Values are Changed Since Last Retrieve

    - by pinaldave
    Here is the question received just this morning. “Pinal, our application is much different from other applications you might have come across. In simple words, I would like to call it an Asynchronously Updated Application. We need your quick opinion about a situation we are facing. From the business side: We have a bidding system (similar to eBay, but not exactly) where multiple parties bid on one item. During the last few minutes of bidding, many parties try to bid at the same time with the same price. When they hit submit, we would like to check whether the original data they retrieved has changed or not. If the original data they retrieved is unchanged, we will accept their new proposed price. If the original data has changed, they will have to resubmit with a new price. From the technical side: We have a row which we retrieve in our application. Multiple users are retrieving the same row. Some of the users will update the value of the row and submit. However, only the very first user should be allowed to update the row, and all the remaining users will have to re-fetch the row and update it once again. We do not want to lock any record as that will create other problems. Do you have any solution for this kind of situation?” Fantastic question. I believe there is a good chance that we can use the timestamp datatype in this kind of application. Before we continue, let us see the following simple example. USE tempdb GO CREATE TABLE SampleTable (ID INT, Col1 VARCHAR(100), TimeStampCol TIMESTAMP) GO INSERT INTO SampleTable (ID, Col1) VALUES (1, 'FirstVal') GO SELECT ID, Col1, TimeStampCol FROM SampleTable st GO UPDATE SampleTable SET Col1 = 'NextValue' GO SELECT ID, Col1, TimeStampCol FROM SampleTable st GO DROP TABLE SampleTable GO Now let us see the resultset. Here is the simple explanation of the scenario: we created a table with a simple column of the TIMESTAMP datatype. When we inserted the very first value, a timestamp was generated. When we updated any value in that row, the timestamp was updated with a new value. Every single time we update any value in the row, a new timestamp value is generated. Now let us apply this to the original question’s scenario. In that case, multiple users are retrieving the same row, so everybody starts with the same timestamp. Before any user updates any value, they should once again retrieve the timestamp from the table and compare it with the timestamp they already have. If both timestamps have the same value, the original row has not been updated and we can safely update the row with the new value. After the initial update, the row will contain a new timestamp. Any subsequent update to the same row should go through the same process of checking the timestamp value held in memory. In this case, the timestamp from memory will be different from the timestamp in the row. This indicates that the row in the table has changed and the new update should not be allowed. I believe timestamp can be very useful in this kind of scenario. Is there any better alternative? Please leave a comment with your suggestion and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
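    As a rough illustration of how the application side of this check might look (a sketch only; the table and column names follow the example above and the connection string is a placeholder), the update can carry the originally-read rowversion in its WHERE clause so it succeeds only if nobody else changed the row in the meantime:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;

    // Attempts an optimistic update: returns true if the row still carries the timestamp
    // we read earlier, false if someone else updated it first (caller should re-fetch).
    static bool TryUpdateBid(int id, string newValue, byte[] originalTimestamp)
    {
        const string connectionString = "..."; // placeholder

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            @"UPDATE SampleTable
                 SET Col1 = @newValue
               WHERE ID = @id
                 AND TimeStampCol = @originalTimestamp;", connection))
        {
            command.Parameters.AddWithValue("@newValue", newValue);
            command.Parameters.AddWithValue("@id", id);
            command.Parameters.Add("@originalTimestamp", SqlDbType.Timestamp).Value = originalTimestamp;

            connection.Open();

            // 0 rows affected means the timestamp no longer matches: the row changed
            // since we read it, so the new bid must be resubmitted against fresh data.
            return command.ExecuteNonQuery() == 1;
        }
    }
    ```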

    Read the article

  • How Can I Start an Incognito/Private Browsing Window from a Shortcut?

    - by Jason Fitzpatrick
    Sometimes you just want to pop the browser open for a quick web search without reloading all your saved tabs; read on as we show a fellow reader how to make a quick private-browsing shortcut. Dear How-To Geek, I came up with a solution to my problem, but I need your help implementing it. I typically have a ton of tabs open in my web browser and, when I need to free up system resources when gaming or using a resource intense application, I shut down the web browser. The problem arises when I find myself needing to do a quick web search while the browser is shut down. I don’t want to open it up, load all the tabs, and waste the resources in doing so all for a quick Google search. The perfect solution, it would seem, is to open up one of Chrome’s Incognito windows: it loads separately, it won’t open up all the old tabs, and it’s perfect for a quick Google search. Is there a way to launch Chrome with a single Incognito window open without having to open the browser in the normal mode (and load the bazillion tabs I have sitting there)? Sincerely, Tab Crazy That’s a rather clever workaround to your problem. Since you’ve already done the hard work of figuring out the solution you need, we’re more than happy to help you across the finish line. The magic you seek is available via what are known as “command line options”, which allow you to add additional parameters and switches onto a command. By appending a switch to the command the Chrome shortcut uses, we can easily tell it to launch in Incognito mode. (And, for other readers following along at home, we can do the same thing with other browsers like Firefox). First, let’s look at Chrome’s default shortcut: If you right click on it and select the properties menu, you’ll see where the shortcut points: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" If you run that shortcut, you’ll open up normal browsing mode in Chrome and your saved tabs will all load. What we need to do is use the command line switches available for Chrome and tell it that we want it to launch an Incognito window instead. Doing so is as simple as appending -incognito to the end of the “Target” box’s command line entry, like so: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" -incognito We’d also recommend changing the icon so it’s easy to tell the default Chrome shortcut apart from your new Incognito shortcut. When you’re done, make sure to hit OK/Apply at the bottom to save the changes. You can recreate the same private-browsing-shortcut effect with other major web browsers too. Repeat the shortcut editing steps we highlighted above, but swap -incognito for -private (for Firefox and Internet Explorer) or -newprivatetab (for Opera). With just a simple command line switch applied, you can now launch a lightweight single browser window for those quick web searches without having to stop your game and load up all your saved tabs. Have a pressing tech question? Email us at [email protected] and we’ll do our best to answer it.
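    If you want the same effect from code rather than a shortcut, say from a small helper app or a hotkey utility, the switch works the same way. This is just a sketch; the Chrome path is taken from the example above and may differ on your machine:

    ```csharp
    using System.Diagnostics;

    class PrivateBrowsing
    {
        static void Main()
        {
            // Launches a single Incognito window without touching the normal session.
            Process.Start(new ProcessStartInfo
            {
                FileName = @"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
                Arguments = "-incognito",
                UseShellExecute = false
            });
        }
    }
    ```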

    Read the article

  • Unity GUI not in build, but works fine in editor

    - by Darren
    I have: GUITexture attached to an object A script that has GUIStyles created for the Textfield and Buttons that are created in OnGUI(). This script is attached to the same object in number 1 3 GUIText objects each separate from the above. A script that enables the GUITexture and the script in number 1 and 2 respectively This is how it is supposed to work: When I cross the finish line, number 4 script enables number 1 GUITexture component and number 2 script component. The script component uses one of number 3's GUIText objects to show you your best lap time, and also makes a GUI.Textfield for name entry and 2 GUI.Buttons for "Submit" and "Skip". If you hit "Submit" the script will submit the time. No matter which button you press, The remaining 2 GUIText objects from number 3 will show you the top 10 best times. For some reason, when I run it in editor, everything works 100%, but when I'm in different kinds of builds, the results vary. When I am in a webplayer, The GUITexture and the textfield and buttons appear, but the textfield and buttons are plain and have no evidence of GUIStyles. When I click one of the buttons, the score gets submitted but I do not get the fastest times showing. When I am in a standalone build, the GUITexture shows up, but nothing else does. If I remove the GUIStyle parameter of the GUI.Textfield and GUI.Button, they show up. Why am I getting these variations and how can I fix it? Code below: void Start () { Names.text = ""; Times.text = ""; YourBestTime.text = "Your Best Lap: " + bestTime + "\nEnter your name:"; //StartCoroutine(GetTimes("Test")); } void Update() { if (!ShowButtons && !GettingTimes) { StartCoroutine(GetTimes()); GettingTimes = true; } } IEnumerator GetTimes () { Debug.Log("Getting times"); YourBestTime.text = "Loading Best Lap Times"; WWW times_get = new WWW(GetTimesUrl); yield return times_get; WWW names_get = new WWW(GetNamesUrl); yield return names_get; if(times_get.error != null || names_get.error != null) { print("There was an error retrieiving the data: " + names_get.error + times_get.error); } else { Times.text = times_get.text; Names.text = names_get.text; YourBestTime.text = "Your Best Lap: " + bestTime; } } IEnumerator PostLapTime (string Name, string LapTime) { string hash= MD5.Md5Sum(Name + LapTime + secretKey); string bestTime_url = SubmitTimeUrl + "&Name=" + WWW.EscapeURL(Name) + "&LapTime=" + LapTime + "&hash=" + hash; Debug.Log (bestTime_url); // Post the URL to the site and create a download object to get the result. WWW hs_post = new WWW(bestTime_url); //label = "Submitting..."; yield return hs_post; // Wait until the download is done if (hs_post.error != null) { print("There was an error posting the lap time: " + hs_post.error); //label = "Error: " + hs_post.error; //show = false; } else { Debug.Log("Posted: " + hs_post.text); ShowButtons = false; PostingTime = false; } } void OnGUI() { if (ShowButtons) { //makes text box nameString = GUI.TextField( new Rect((Screen.width/2)-111, (Screen.height/2)-130, 222, 25), nameString, 20, TextboxStyle); if (GUI.Button( new Rect( (Screen.width/2-74.0f), (Screen.height/2)- 90, 64, 32), "Submit", ButtonStyle)) { //SUBMIT TIME if (nameString == "") { nameString = "Player"; } if (!PostingTime) { StartCoroutine(PostLapTime(nameString, bestTime)); PostingTime = true; } } else if (GUI.Button( new Rect( (Screen.width/2+10.0f), (Screen.height/2)- 90, 64, 32), "Skip", ButtonStyle)) { ShowButtons = false; } } } }
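    One thing worth checking, offered here as a sketch rather than a confirmed fix: GUIStyles built entirely in code can render differently in a build if they depend on resources that only exist in the editor. A common alternative is to put the styles in a GUISkin asset, assign it in the Inspector, and apply it at the top of OnGUI(), so the build ships the same style data the editor used. The class and field names below are illustrative:

    ```csharp
    using UnityEngine;

    public class FinishLineMenu : MonoBehaviour
    {
        public GUISkin menuSkin;        // drag a GUISkin asset onto this field in the Inspector
        public bool ShowButtons;
        private string nameString = "";

        void OnGUI()
        {
            if (menuSkin != null)
                GUI.skin = menuSkin;    // textfield/button styles now come from the asset

            if (ShowButtons)
            {
                // No explicit GUIStyle arguments: the skin supplies them in editor and build alike.
                nameString = GUI.TextField(
                    new Rect((Screen.width / 2) - 111, (Screen.height / 2) - 130, 222, 25),
                    nameString, 20);

                if (GUI.Button(new Rect((Screen.width / 2) - 74f, (Screen.height / 2) - 90, 64, 32), "Submit"))
                {
                    // submit the lap time as before
                }
            }
        }
    }
    ```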

    Read the article

  • My grid based collision detection is slow

    - by Fibericon
    Something about my implementation of a basic 2x4 grid for collision detection is slow - so slow in fact, that it's actually faster to simply check every bullet from every enemy to see if the BoundingSphere intersects with that of my ship. It becomes noticeably slow when I have approximately 1000 bullets on the screen (36 enemies shooting 3 bullets every .5 seconds). By commenting it out bit by bit, I've determined that the code used to add them to the grid is what's slowest. Here's how I add them to the grid: for (int i = 0; i < enemy[x].gun.NumBullets; i++) { if (enemy[x].gun.bulletList[i].isActive) { enemy[x].gun.bulletList[i].Update(timeDelta); int bulletPosition = 0; if (enemy[x].gun.bulletList[i].position.Y < 0) { bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450); } else { bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450) + 4; } GridItem bulletItem = new GridItem(); bulletItem.index = i; bulletItem.type = 5; bulletItem.parentIndex = x; if (bulletPosition > -1 && bulletPosition < 8) { if (!grid[bulletPosition].Contains(bulletItem)) { for (int j = 0; j < grid.Length; j++) { grid[j].Remove(bulletItem); } grid[bulletPosition].Add(bulletItem); } } } } And here's how I check if it collides with the ship: if (ship.isActive && !ship.invincible) { BoundingSphere shipSphere = new BoundingSphere( ship.Position, ship.Model.Meshes[0].BoundingSphere.Radius * 9.0f); for (int i = 0; i < grid.Length; i++) { if (grid[i].Contains(shipItem)) { for (int j = 0; j < grid[i].Count; j++) { //Other collision types omitted else if (grid[i][j].type == 5) { if (enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive) { BoundingSphere bulletSphere = new BoundingSphere(enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].position, enemy[grid[i][j].parentIndex].gun.bulletModel.Meshes[0].BoundingSphere.Radius); if (shipSphere.Intersects(bulletSphere)) { ship.health -= enemy[grid[i][j].parentIndex].gun.damage; enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive = false; grid[i].RemoveAt(j); break; //no need to check other bullets } } else { grid[i].RemoveAt(j); } } What am I doing wrong here? I thought a grid implementation would be faster than checking each one.
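    For what it is worth, one pattern that usually keeps grid insertion cheap is to remember which cell each bullet currently occupies and only touch the grid when that cell changes, instead of building a fresh GridItem and scanning every cell with Remove() for every bullet on every frame. The sketch below assumes a couple of fields that are not in the code above (gridCell and gridItem on the bullet, plus a Bullet type name), so treat it as an illustration of the idea rather than a drop-in replacement:

    ```csharp
    // Sketch: cache the bullet's current cell so each frame costs one comparison,
    // not a Remove() scan across all eight cells.
    int GetCellIndex(Vector3 position)
    {
        int cell = (int)Math.Floor((position.X + 900f) / 450f);
        if (position.Y >= 0)
            cell += 4;                      // second row of the 2x4 grid
        return cell;
    }

    void UpdateBulletCell(Bullet bullet, int enemyIndex, int bulletIndex)
    {
        int newCell = GetCellIndex(bullet.position);
        if (newCell < 0 || newCell > 7)
            return;                         // off the grid this frame

        if (bullet.gridCell == newCell)     // assumed int field, -1 when not yet placed
            return;                         // same cell as last frame - nothing to do

        if (bullet.gridCell != -1)
            grid[bullet.gridCell].Remove(bullet.gridItem);   // remove from the old cell only

        bullet.gridItem = new GridItem { index = bulletIndex, type = 5, parentIndex = enemyIndex };
        grid[newCell].Add(bullet.gridItem);
        bullet.gridCell = newCell;
    }
    ```

    A side effect of the original code worth noting: because a new GridItem is allocated each frame, Contains() and Remove() compare against an object the lists have never seen, so items tend to accumulate and every cell gets scanned repeatedly, which lines up with the slowdown you are seeing.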

    Read the article

  • Essential Links for the SharePoint Client Side Developer

    - by Mark Rackley
    Front End Developer? Client Side Developer? Middle Tier??? I’m covering all my bases.  Regardless, I’m sick and tired of Googling with Bing when I forget where information that I need often is located. I was getting ready to bookmark some of them when it hit me… “Hey Mark… (I don’t actually refer to myself in the third person), Why don’t you put the links in a blog so that it looks like you are being helpful!” I can’t tell you how many times I’ve had to go back to some of my old blogs to remember how I did something. Seriously people, you need to start a blog, it’s the best way to remember how the frick you got something to work… and it looks like you are being helpful when in reality you are just forgetful.  So… where was I? Oh yeah.. essential information that I’ve needed from time to time when I was not using Visual Studio. All of this info has come in handy from time to time. Know about these things and keep them in your tool belt, it’s amazing the stuff you can accomplish with just knowing where to look. What Why SPServices Widely used library written by Marc Anderson used to call SharePoint Web Services with jQuery jQuery For SPServices and other cool stuff Easy Tabs Essential tool for quick page enhancements. This widely used too from Christophe Humbert groups multiple web parts into one tabbed display. Very quick and easy way to get oohs and ahs from End Users. Convert Calculated Columns to HTML Also from Christophe, I use this script all the time to convert html in my calculated columns to actually display as html and not with the tags. Unlocking the Mysteries of Data View Web Part XSL Tags This blog series from Marc Anderson makes it very easy to understand what’s going on with all those weird xsl tags in your data view web parts. Essential to make those things do what you want them to do. Creating Parent / Child list relationships (2007) Creating Parent / Child list relationships (2010) By far my most viewed blog posts (tens and tens of thousands).  I have posts for both 2007 and 2010 that walk you through automatically setting the lookup id on a list to its “parent”. Set SharePoint Form fields using Query String Variables Also widely read, this one walks you through taking a variable from your Query String and set a form field to that value.   Hmmm… I KNOW there are more, but I’m tired and drawing a blank.  I’ll try to add them when I remember them (or need them again and think “Oh, I forgot to add that one”) But it’s a start, and please feel free to add your own in the comments… So, it’s YOUR turn to be helpful. What little tip or trick do you find yourself using ALL the time that you think everyone should know about??

    Read the article

  • How to Create SharePoint List and Insert List Item programmatically from a Windows Forms Application.

    - by Michael M. Bangoy
In this post I'm going to demonstrate how to create a SharePoint list and insert items into it from a Windows Forms application.

1. Open Visual Studio and create a new project. From the project templates, select Windows Forms Application under C#.

2. In order to communicate with SharePoint from a Windows Forms application we need to add the two SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.

3. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)

4. Open Form1 in design view and, from the Toolbox, add a button to the form surface. Your form should look like the one below.

5. Double-click the button to open the code view. Add a using statement to reference the SharePoint Client library, then create the method for creating the list. Your code should look like the code below.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Security;
using System.Windows.Forms;
using SP = Microsoft.SharePoint.Client;

namespace ClientObjectModel
{
    public partial class Form1 : Form
    {
        // url of the SharePoint site
        const string _context = "urlofthesharepointsite";

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void cmdcreate_Click(object sender, EventArgs e)
        {
            try
            {
                // declare the ClientContext object
                SP.ClientContext _clientcontext = new SP.ClientContext(_context);
                SP.Web _site = _clientcontext.Web;

                // declare a ListCreationInformation
                SP.ListCreationInformation _listcreationinfo = new SP.ListCreationInformation();

                // set the Title and the Template of the list to be created
                _listcreationinfo.Title = "NewListFromCOM";
                _listcreationinfo.TemplateType = (int)SP.ListTemplateType.GenericList;

                // call the Add method with the ListCreationInformation
                SP.List _list = _site.Lists.Add(_listcreationinfo);

                // add a Description field to the list
                SP.Field _Description = _list.Fields.AddFieldAsXml(@"
                    <Field Type='Text'
                        DisplayName='Description'>
                    </Field>", true, SP.AddFieldOptions.AddToDefaultContentType);

                // declare the ListItemCreationInformation for creating a list item
                SP.ListItemCreationInformation _itemcreationinfo = new SP.ListItemCreationInformation();

                // call the AddItem method of the list to insert a new list item
                SP.ListItem _item = _list.AddItem(_itemcreationinfo);
                _item["Title"] = "New Item from Client Object Model";
                _item["Description"] = "This item was added by a Windows Forms Application";

                // call the Update method
                _item.Update();

                // execute the query of the ClientContext
                _clientcontext.ExecuteQuery();

                // dispose of the ClientContext
                _clientcontext.Dispose();

                MessageBox.Show("List Creation Successful");
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error creating list: " + ex.ToString());
            }
        }
    }
}

6. Hit F5 to run the application. A message will be displayed on the screen whether the operation succeeds or fails.

7. To confirm that our Windows Forms application really created the list and inserted an item into it, let's open the SharePoint site. Once the site is open, click Site Actions and then View All Site Content.

8. Click the list to open it and check that an item was inserted. That's it. Hope this helps.
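As an optional follow-up (not part of the original post), here is a hedged sketch of reading the inserted item back with the same Client Object Model, assuming the same SP alias, site URL constant (_context) and the "NewListFromCOM" title used above:

// Sketch: read back the items of the list created above to verify the insert.
SP.ClientContext clientcontext = new SP.ClientContext(_context);
SP.List list = clientcontext.Web.Lists.GetByTitle("NewListFromCOM");

// query every item in the list
SP.CamlQuery query = SP.CamlQuery.CreateAllItemsQuery();
SP.ListItemCollection items = list.GetItems(query);

clientcontext.Load(items);
clientcontext.ExecuteQuery();

foreach (SP.ListItem item in items)
{
    MessageBox.Show(item["Title"] + " - " + item["Description"]);
}

clientcontext.Dispose();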

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world-class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX.

Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters, including perhaps one of the largest GlassFish deployments in Brazil, as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community.

Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leaders' party on Wednesday evening.

On Thursday Arun Gupta, Bruno Borges and I ran a hands-on lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab provides developers with a first-hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman. The actual content for the lab is available here. Give me a shout if you need help getting it up and running.

I gave two solo talks following the lab. The first was on JMS 2, titled "What's New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well; there was good Q & A and someone even told me this was the best session of the conference! The slides for the talk are here: What's New in Java Message Service 2 from Reza Rahman.

My last talk for the conference was on JAX-RS 2 in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API", this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards.
The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman.

On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my favorite heavy metal groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by the local metal act Republica at the legendary Manifesto Bar. I also wanted to see a Dio tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference and Sao Paulo, and I look forward to going to Brazil again next year!

    Read the article

  • StreamInsight 1.0 Released

    - by Roman Schindlauer
One piece in the set of products offered in SQL Server 2008 R2 that has generated a lot of buzz and interest during its CTP phase is StreamInsight, Microsoft's platform for Complex Event Processing. Microsoft's information platform vision provides enterprises with a "complete approach" to managing information assets, enabling all businesses to gain strategic value from information from the desktop to the datacenter to the cloud, and StreamInsight V1 is one essential piece in this spectrum. After more than a year of blood, sweat, tears, and insane amounts of coffee we are proud to release the first version of our Complex Event Processing Framework.

Those of you who have been following our Community Technology Previews (CTPs) throughout last year have already had the opportunity to familiarize yourselves with the product. Early feedback was not only incredibly positive, but also very constructive and strongly influenced the final feature set. Four notable increments over our last public CTP are:

- Count windows
- Non-occurrence detection (Anti-Join)
- Dynamic query composition at runtime
- Synchronize time across input streams

Additionally, many smaller issues and bugs were addressed. A few APIs changed slightly with respect to the November CTP, but porting your application to RTM should not require a lot of effort.

Here are the (English) bits; choosing the evaluation license during setup lets you start playing with this version right away. Before you install, make sure to uninstall any previous CTP version:

StreamInsight X86 | StreamInsight X64

Within a few days, we will update our product page and add download links and instructions there as well.

The StreamInsight documentation is provided through a help file as part of the installation as well as through Books Online on MSDN. We also invite you to visit the StreamInsight Blog and the StreamInsight Forum, which is a great place to discuss questions and issues with the community and the development team.

Regards,
Roman

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
Preamble

This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

Why Columnstore?

If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

App: MSIT SONAR Aggregations

At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore.

The Win

Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), query performance with a SQL Server 2012 nonclustered columnstore index increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous and were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

Details

The table provides the raw data and summarizes the performance deltas.

                                  Logical Reads (8K pages)    CPU (ms)    Durn (ms)
  Columnstore                     160,323                     20,360      9,786
  Conventional Table & Indexes    9,053,423                   549,608     193,903
  Delta                           x56                         x27         x20

The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Delta)" chart expresses these values as a ratio.

Summary

For DW, reports, and other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU and duration improved by factors of 20 or more. Subsequent posts in this series document performance enhancements that are even more significant.
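For readers who want to try this on their own data, here is a minimal sketch (not from the case study) of creating a SQL Server 2012 nonclustered columnstore index, issued from C# through SqlClient; the connection string, table and column names are hypothetical placeholders.

using System;
using System.Data.SqlClient;

class CreateColumnstoreIndexSample
{
    static void Main()
    {
        // Hypothetical connection string and table/column names - adjust for your environment.
        const string connectionString = "Server=.;Database=SonarDW;Integrated Security=true";
        const string ddl = @"
            CREATE NONCLUSTERED COLUMNSTORE INDEX IX_PerfSample_Columnstore
            ON dbo.PerfSample (CounterId, SampleDate, SampleValue);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.CommandTimeout = 0; // building the index can take a while on a large table
            command.ExecuteNonQuery();
            Console.WriteLine("Columnstore index created.");
        }
    }
}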

    Read the article

  • Picture rendered from above and below using an Orthographic camera do not match

    - by Roy T.
I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice.

The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering; I've added them so you can easily count/see the differences.

Figure 2: the T rendered from above
Figure 3: the T rendered from below

This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, my bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras.

Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ are a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.

Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0, boundsDimY / (float)renderTarget.Height) * 0.5f;

This is the code where I set the camera position and camera look-ats:

// Position camera
if (downwards)
{
    float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
    Vector3 cameraPosition = new Vector3
    (
        boundsCentre.X, // possibly adjust by half a pixel?
        cameraHeight,
        boundsCentre.Z
    );

    camera.Position = cameraPosition;
    camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
}
else
{
    float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
    Vector3 cameraPosition = new Vector3
    (
        boundsCentre.X,
        cameraHeight,
        boundsCentre.Z
    );

    camera.Position = cameraPosition;
    camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
}

Main Question
Now that you've seen all the problems and the code, you can probably guess it. My main question is: how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)?

Figure 1: the original model rendered in Blender
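For what it's worth, here is a minimal sketch (not a confirmed fix) of deriving both passes from one shared code path so that any half-pixel experiment is applied identically to both cameras; Matrix.CreateLookAt stands in for whatever the question's camera class does internally, and the use of Vector3.UnitZ as the shared 'up' vector is an assumption.

// Sketch only: build both cameras from the same inputs so the two branches cannot drift apart.
// halfPixelWorld mirrors the question's halfPixel computation (using boundsDimZ for the Z axis).
Vector3 up = Vector3.UnitZ; // assumed shared 'up' vector for both passes

Vector3 halfPixelWorld = new Vector3(
    boundsDimX / (float)renderTarget.Width,
    0f,
    boundsDimZ / (float)renderTarget.Height) * 0.5f;

float cameraHeight = downwards
    ? bounds.Max.Y + 0.501f - (sliceHeight * i)
    : bounds.Max.Y - 0.501f - (sliceHeight * i);

// Same X/Z position for both passes; only the look direction flips.
Vector3 position = new Vector3(boundsCentre.X, cameraHeight, boundsCentre.Z) + halfPixelWorld;
Vector3 target = position + (downwards ? -Vector3.UnitY : Vector3.UnitY);

Matrix view = Matrix.CreateLookAt(position, target, up);
Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);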

    Read the article

  • Wireless Broadcom 4313 not working on Ubuntu 12.04

    - by user88568
It seems a lot of people are having this problem, but none of the posted solutions have worked for me so far. My driver is installed and activated, I have tried removing and re-adding the network, and various other fixes. No networks were picked up on my first boot, the next day wireless worked fine, and since then it does not detect networks, and when I manually try to connect, it repeatedly asks for the password and does not connect. Here's my info:

03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
    Subsystem: Dell Inspiron M5010 / XPS 8300
    Flags: bus master, fast devsel, latency 0, IRQ 17
    Memory at f0500000 (64-bit, non-prefetchable) [size=16K]
    Capabilities: [40] Power Management version 3
    Capabilities: [58] Vendor Specific Information: Len=78
    Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [d0] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [13c] Virtual Channel
    Capabilities: [160] Device Serial Number 00-00-a1-ff-ff-f3-70-f1
    Capabilities: [16c] Power Budgeting
    Kernel driver in use: wl
    Kernel modules: wl, bcma, brcmsmac

root@michelle-laptop:/home/michelle# ifconfig
eth0    Link encap:Ethernet  HWaddr 70:f1:a1:f3:ba:ab
        inet6 addr: fe80::72f1:a1ff:fef3:baab/64 Scope:Link
        UP BROADCAST MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:29
        TX packets:0 errors:30 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        Interrupt:17

eth1    Link encap:Ethernet  HWaddr f0:4d:a2:53:83:7a
        UP BROADCAST MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        Interrupt:43

lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:104 errors:0 dropped:0 overruns:0 frame:0
        TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:8000 (8.0 KB)  TX bytes:8000 (8.0 KB)

root@michelle-laptop:/home/michelle# lsmod
Module                  Size  Used by
dm_crypt               22528  0
snd_hda_codec_hdmi     31775  1
snd_hda_codec_realtek 174313  1
snd_hda_intel          32765  5
snd_hda_codec         109562  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
snd_hwdep              13276  1 snd_hda_codec
snd_pcm                80845  4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
snd_seq_midi           13132  0
snd_rawmidi            25424  1 snd_seq_midi
snd_seq_midi_event     14475  1 snd_seq_midi
snd_seq                51567  2 snd_seq_midi,snd_seq_midi_event
parport_pc             32114  0
ppdev                  12849  0
binfmt_misc            17292  1
lib80211_crypt_tkip    17275  0
bnep                   17830  2
rfcomm                 38139  0
snd_timer              28931  2 snd_pcm,snd_seq
snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
joydev                 17393  0
wl                   2646601  0
snd                    62064  19 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
soundcore              14635  1 snd
btusb                  17912  0
bluetooth             158438  11 bnep,rfcomm,btusb
uvcvideo               67203  0
videodev               86588  1 uvcvideo
snd_page_alloc         14108  2 snd_hda_intel,snd_pcm
lib80211               14040  2 lib80211_crypt_tkip,wl
intel_ips              17822  0
psmouse                72919  0
serio_raw              13027  0
mei                    36570  0
dell_laptop            17767  0
dell_wmi               12601  0
dcdbas                 14098  1 dell_laptop
sparse_keymap          13658  1 dell_wmi
mac_hid                13077  0
lp                     17455  0
parport                40930  3 parport_pc,ppdev,lp
usbhid                 41906  0
hid                    77367  1 usbhid
wmi                    18744  1 dell_wmi
i915                  414817  3
atl1c                  36718  0
drm_kms_helper         45466  1 i915
drm                   197692  4 i915,drm_kms_helper
i2c_algo_bit           13199  1 i915
video                  19068  1 i915

root@michelle-laptop:/home/michelle# iwlist scan
lo        Interface doesn't support scanning.
eth1      Interface doesn't support scanning.
eth0      No scan results

root@michelle-laptop:/home/michelle# rfkill list
0: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
1: dell-bluetooth: Bluetooth
        Soft blocked: yes
        Hard blocked: no
3: brcmwl-0: Wireless LAN
        Soft blocked: no
        Hard blocked: no

Obviously I don't know what I'm doing. I'd appreciate any help! Thanks!

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. So I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project) and whenever it would hit the code to save it, it would give me:

The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.

This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

[DataContract]
public class MyAwesomeClass
{
    [DataMember]
    public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

    [DataMember]
    public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

    public MyAwesomeClass()
    {
        GreatItems = new ObservableCollection<MyMagicItemsClass>();
        SuperbItems = new ObservableCollection<MyMagicItemsClass>();
    }
}

That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had the DataContractAttribute applied to it, and it had the DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it's not, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug->Exceptions…" VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself.

Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to them and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine.

In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields in certain circumstances).
The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break on all thrown exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
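To make the fix concrete, here is a small hedged sketch of the pattern described above; the field and property names are illustrative, not from the post, and only the partial-trust restriction itself comes from the original story.

using System.Runtime.Serialization;

[DataContract]
public class MyMagicItemsClass
{
    // In partial trust (e.g. WP7), DataContractSerializer cannot touch non-public members,
    // so a private serialized field produces the misleading "not public" error:
    // [DataMember]
    // private string _notes;

    // Serializing a public field (or a public property) works:
    [DataMember]
    public string Notes;

    // A property whose setter logic should be skipped can still be excluded from serialization:
    [IgnoreDataMember]
    public string NotesDisplay
    {
        get { return "Notes: " + Notes; }
    }
}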

    Read the article

  • Impressions on jQuery Mobile

    - by Jeff
For the uninitiated, jQuery Mobile is a sweet little client framework that turns regular HTML into something more touch and mobile friendly. It results in a user interface that has bigger targets, rounded corners and simple skinning capability. When it was announced that ASP.NET MVC 4 would include support for a mobile-sensitive view engine, offering up alternate views for clients that fit the mobile profile, I was all over that. Combined with jQuery Mobile, it brought a chance to do some experimentation. I blitzed through the views in POP Forums and converted them all to mobile views. (For the curious, this first pass can be found here on CodePlex, while a more recent update that uses RC 2 of jQuery Mobile v1.1.0 is running on the demo site.)

Initially, it was kind of a mixed bag. The jQuery demo site also acts as documentation, and it's reasonably complete. I had no problem getting a lot of basic views up quickly, splitting out portions of some pages as subpages that load in quickly. The default behavior in the older version was to slide the pages in, which looked a little weird when you were using a back button. They've since changed it so the default transition is a fade in/out. Because you're dealing with Web pages, I don't think anyone is really under the illusion that they're using a native app, so I don't know that this matters.

I've tested extensively on iPad and Windows Phone, and to be honest, I've encountered a lot of issues. On Windows Phone, there is some kind of inconsistency that prevents the viewport settings from being respected properly. The text background on text fields (for labeling) doesn't work, either. On both platforms, certain in-DOM page navigation links work only half of the time. Is this an issue of user error? Probably, but that's what's frustrating about it. Most of what you accomplish with this framework involves decorating various elements with CSS classes. There isn't any design-time safety to speak of to make sure that you're doing it right.

I think the issues can be overcome, but there are some trade-offs to consider. The first is download size. Yes, the scripts and CSS do get cached, but that first hit will cost nearly 40k for the mobile parts. That's still a lot when you're on some crappy AT&T EDGE network, or hotel Wi-Fi. Then you have to ask yourself, do you really want your app to look like it's native to iOS? I'm not saying that's a bad thing, because consistent UI is good, but you will end up feeling a whole lot of sameness, and maybe you don't want that. I did some experimentation to try and Metro-ize the jQuery Mobile theme, and it's kind of a mixed bag. It mostly works, but you get some weirdness on badges and with buttons that I'm not crazy about. It probably just means you need to keep tweaking.

At this point, I'm a little torn about whether or not I'll use it for POP Forums or one of the sites I'm working on. The benefits are pretty strong, but figuring out where I'm doing it wrong is proving a little time-consuming.
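For context, a hedged sketch of how the mobile-sensitive view selection mentioned above is typically wired up in ASP.NET MVC 4: registering a DisplayMode in Application_Start so that views with a matching suffix (for example Index.Phone.cshtml) are served to matching clients. The "Phone" suffix and the user-agent check are illustrative, not taken from the post.

using System;
using System.Web;
using System.Web.WebPages;   // DisplayModeProvider, DefaultDisplayMode

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Serve Views\...\*.Phone.cshtml (when present) to Windows Phone clients;
        // anything else falls back to *.Mobile.cshtml or the default view.
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("Phone")
        {
            ContextCondition = context =>
                context.GetOverriddenUserAgent()
                       .IndexOf("Windows Phone", StringComparison.OrdinalIgnoreCase) >= 0
        });
    }
}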

    Read the article
