Search Results

Search found 14767 results on 591 pages for 'twitter api'.


  • Two free SQL Server events I'll be presenting at in UK. Come and say hi!

    - by Mladen Prajdic
    SQLBits: April 7th - April 9th 2011 in Brighton, UK. A free community event on Saturday (April 9th), with a paid conference day on Friday (April 8th) and a pre-conference day full of day-long seminars (April 7th). It'll be a huge event with over 800 attendees and over 20 MVPs. I'll be presenting on Saturday, April 9th. SQL in the City: July 15th 2011 in London, UK. One day of free SQL Server training sponsored by Redgate. Other MVPs who'll be presenting there are Steve Jones (website|twitter), Brad McGehee (blog|twitter) and Grant Fritchey (blog|twitter). At both conferences I'll be presenting about database testing. In the sessions I'll cover a few things from my book The Red Gate Guide to SQL Server Team-based Development, like what we need for testing, how to go about it, and what some of the obstacles are that we have to overcome. If you're around, come and say hi!

    Read the article

  • REST doesn't work with Server-Client-Client setup

    - by drozzy
    I am having a problem with my current RESTful API design. What I have is a REST API which is consumed by a Django web server, which renders the HTML templates: REST API > Django web server > HTML. The problem I am encountering is that I have to reconstruct all the URLs like mysite.com/main/cities/<id>/streets/ into equivalent REST API URLs on my web-server layer: api.com/cities/<id>/streets/. Thus I have a lot of mapping back and forth, but as far as I know REST says that the client (in this case my web server) should NOT need to know how to construct the URLs. Can REST be used for such a setup, and how? Or is it only viable for a plain server-client architecture? Thanks
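
    One way out of the URL-reconstruction problem is hypermedia (HATEOAS): the API embeds the URLs it wants clients to follow, and the web server never builds them itself. A minimal Python sketch, assuming a hypothetical payload that exposes a "links" object (the key names are not from the original question):

        # Sketch: follow hypermedia links instead of rebuilding URLs.
        # Assumes the REST API embeds link relations in its responses;
        # the "links" key below is hypothetical -- adapt to your payload.
        import requests

        API_ROOT = "http://api.com/"  # hypothetical entry point

        def get_streets(city_id):
            # 1. Ask the API root where the cities collection lives.
            root = requests.get(API_ROOT).json()
            cities_url = root["links"]["cities"]

            # 2. Fetch the city; its payload says where its streets are.
            city = requests.get("%s%s/" % (cities_url, city_id)).json()
            streets_url = city["links"]["streets"]

            # 3. The web server never constructed a URL pattern itself.
            return requests.get(streets_url).json()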

    Read the article

  • Is there a DRMAA Java library that works with Torque/PBS?

    - by dondondon
    Does anybody know a Java implementation of the DRMAA API that is known to work with PBS/Torque cluster software? The background: I would like to submit jobs to a newly set-up Linux cluster from Java using a DRMAA-compliant API. The cluster is managed by PBS/Torque. Torque includes a DRMAA 1.0 library for Torque/PBS that contains a DRMAA-C binding and provides libdrmaa.so and .a binaries. I know that Sun Grid Engine includes a drmaa.jar providing a Java DRMAA API. In fact I opted to use SGE, but it was decided to try PBS first. The theory behind that decision was: 'DRMAA is a standard, and therefore a Java API needs only a standards-compliant DRMAA-C binding.' However, I couldn't find such a 'general DRMAA-C-to-Java API' and now assume that this assumption is wrong and that the Java libraries are engine-specific. Are we stuck here? Any comments appreciated.
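
    For reference, the DRMAA 1.0 Java binding (org.ggf.drmaa) defines the same interface regardless of engine; the catch is that the SessionFactory must resolve to a vendor implementation at runtime, so a PBS/Torque-backed drmaa.jar has to be on the classpath. A minimal submission sketch against the standard interface (argument-handling details can vary slightly between binding versions):

        import org.ggf.drmaa.DrmaaException;
        import org.ggf.drmaa.JobTemplate;
        import org.ggf.drmaa.Session;
        import org.ggf.drmaa.SessionFactory;

        public class SubmitJob {
            public static void main(String[] args) throws DrmaaException {
                // SessionFactory looks up the vendor implementation,
                // so an engine-specific drmaa.jar must be present.
                Session session = SessionFactory.getFactory().getSession();
                session.init(null);  // default contact string
                JobTemplate jt = session.createJobTemplate();
                jt.setRemoteCommand("/bin/sleep");
                jt.setArgs(java.util.Collections.singletonList("30"));
                String jobId = session.runJob(jt);
                System.out.println("Submitted job " + jobId);
                session.deleteJobTemplate(jt);
                session.exit();
            }
        }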

    Read the article

  • How do I bind a jQuery Tools overlay event to an existing overlay?

    - by Paul
    Say when the page loads, this code runs: jQuery(document).ready(function($){ $('#overlay').overlay({ api: true }); }); How would I bind an event to it? I've tried: $('#overlay').onBeforeLoad( function(){ alert('Hi'); }); $('#overlay').bind( 'onBeforeLoad', function(){ alert('Hi'); }); var api = $('#overlay').data('overlay'); api.onBeforeLoad(function(){ alert('Hi') }); When I do: alert(api.getContent().attr('id')); an alert pops up with '#overlay' inside. When the overlay is open and I run: alert(api.isOpened()); an alert pops up with 'false' inside. Thanks in advance.
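
    One approach worth trying (a sketch): jQuery Tools generally accepts event callbacks in the configuration object passed to overlay(), and with api: true the call returns the API object itself:

        jQuery(document).ready(function ($) {
            // Callbacks go in the configuration object; with api: true
            // the call returns the overlay API object rather than the
            // jQuery wrapper.
            var api = $('#overlay').overlay({
                api: true,
                onBeforeLoad: function () {
                    alert('Hi');
                }
            });
        });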

    Read the article

  • Android Apps: What is the recommended targetSdk for broadest appeal?

    - by johnrock
    I have an Android app that only needs internet access and would like to target API level 3 (1.5) to reach the broadest handset base. However, it appears that targeting API level 3 implicitly requires two additional permissions that are visible to users: modify SD card, and read phone state (see http://stackoverflow.com/questions/1747178/android-permissions-phone-calls-read-phone-state-and-identity). So the conundrum: do I target API level 4 and turn away users running 1.5, or do I target API level 3 and turn away users who are upset that my app is requesting so many permissions that it shouldn't need? What is the smartest thing to do here? Are there really a lot of users still limited to API level 3? I appreciate any wisdom offered! Thanks!
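
    One commonly used compromise (worth verifying against the current SDK docs) is to keep minSdkVersion at 3 while raising targetSdkVersion to 4: the SD-card and phone-state permissions are only implied for apps targeting API level 3 or lower, so this removes them from the install prompt while still allowing installation on 1.5 handsets. A manifest sketch (package name hypothetical):

        <!-- Installs on 1.5 (API 3) but targets API 4, so the
             sdcard/phone-state permissions are no longer implied. -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.app">
            <uses-sdk android:minSdkVersion="3"
                      android:targetSdkVersion="4" />
            <uses-permission android:name="android.permission.INTERNET" />
            <!-- application element unchanged -->
        </manifest>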

    Read the article

  • Advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace references in the Model to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people re: their experiences with Zend_Db API vs raw SQL, and use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume writing a custom SQL API will be more performant than the assorted classes the Zend DB API requires.
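
    To make the trade-off concrete, here is a sketch of the two styles in ZF1 (table and column names are hypothetical, and a default adapter is assumed to have been set via Zend_Db_Table::setDefaultAdapter()):

        <?php
        // Table API: portable and object-oriented, but each fetch builds
        // Zend_Db_Table_Row objects on top of the result set.
        $table = new Zend_Db_Table('users');
        $row   = $table->fetchRow($table->select()->where('id = ?', $id));

        // Raw SQL through the adapter: less abstraction, plain array back.
        $db   = Zend_Db_Table::getDefaultAdapter();
        $user = $db->fetchRow('SELECT * FROM users WHERE id = ?', $id);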

    Read the article

  • Over-reporting in Google Analytics - social media stats

    - by colmcq
    I have a client whose traffic from social media reads thus: 80% from Facebook, 1% from Twitter. This suggests they are not exploiting Twitter at all. This was in my presentation, but my boss took it out, claiming Twitter stats are under-reported in Google Analytics. I can't substantiate this claim and wonder where she got this idea from. Can anyone shed light on this? Are my stats wrong, and should I disregard these figures? 80-to-1 seems like one hell of an under-report! thanks c

    Read the article

  • How to get the corresponding build artifacts of a job in Jenkins?

    - by kaleeswaran14
    I create Jenkins jobs using the hudson.cli.CLI jar. I have selected the "Archive the artifacts" option in the "Post-build steps" section, which archives the artifacts on each successful build. I am using the Jenkins remote access API: http://localhost:8080/job/job_name/api/json to get details about jobs, and http://localhost:8080/job/job_name/job_number/api/json to get details about builds. When I delete a build, the corresponding archived artifacts are not deleted; I'd like to make sure that they are. When I use the remote access API http://localhost:8080/job/[job_name]/[job_number]/api/json for a build, it returns JSON data which contains all previously archived artifacts (other successful builds' artifacts) along with the current build's artifact. How do I get only the artifacts related to one build (a successful build should return its own artifact, not all previous successful builds' artifacts)? Any suggestions or ideas?
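
    Two notes, offered tentatively: in a stock Jenkins setup the artifacts array on a build's JSON lists only the files archived by that build, so older files showing up often means the archive pattern is matching leftovers in the workspace from earlier builds. Either way, the tree query parameter narrows a build's response to just its own artifacts (job name, build number and artifact path below are hypothetical):

        # List one build's archived artifacts only
        curl 'http://localhost:8080/job/job_name/42/api/json?tree=artifacts[fileName,relativePath]'

        # Each listed artifact is then fetchable under the build's artifact/ path
        curl -O 'http://localhost:8080/job/job_name/42/artifact/dist/app.jar'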

    Read the article

  • Rails 3 respond_to: default format?

    - by bdorry
    I am converting a Rails 2 application to Rails 3. I currently have a controller set up like the following: class Api::RegionsController < ApplicationController respond_to :xml, :json end with an action that looks like the following: def index @regions = Region.all respond_with @regions end The implementation is pretty straightforward: api/regions, api/regions.xml and api/regions.json all respond as you would expect. The problem is that I want api/regions by default to respond via XML. I have consumers that expect an XML response and I would hate to have them change all their URLs to include .xml unless absolutely necessary. In Rails 2 you would accomplish that by doing this: respond_to do |format| format.xml { render :xml => @region.to_xml } format.json { render :json => @region.to_json } end But in Rails 3 I cannot find a way to default it to an XML response. Any ideas?
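
    One way to keep extensionless URLs answering in XML is to set the default format at the routing layer. A sketch against the Rails 3 router (route names assumed from the controller above):

        # config/routes.rb
        namespace :api do
          # Extensionless requests to /api/regions now default to XML;
          # explicit .json requests still negotiate JSON as before.
          resources :regions, :defaults => { :format => 'xml' }
        end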

    Read the article

  • Can't run the ActionBarCompat sample

    - by David Miler
    I am having trouble compiling and running the ActionBarCompat sample from the API 16 SDK. I have API level 16 selected as the build target, which seems to build fine, but when I try to debug, these errors pop up. Of course I could change the min API level in the manifest, but what would be the point of that? I have made no changes to the sample, so how come it is not working properly?
    Class requires API level 14 (current min is 3): android.view.ActionProvider - SimpleMenuItem.java /ActionBarCompat/src/com/example/android/actionbarcompat line 129 - Android Lint Problem
    Class requires API level 14 (current min is 3): android.view.ActionProvider - SimpleMenuItem.java /ActionBarCompat/src/com/example/android/actionbarcompat line 134 - Android Lint Problem
    Class requires API level 14 (current min is 3): android.view.MenuItem.OnActionExpandListener - SimpleMenuItem.java /ActionBarCompat/src/com/example/android/actionbarcompat line 155 - Android Lint Problem
    I am thoroughly confused; any help would be appreciated.
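
    Those are Lint errors rather than compiler errors; the compat samples guard ICS-only classes behind runtime version checks, so one option (a sketch; the class name here is hypothetical) is to tell Lint that the newer APIs are intentional:

        import android.annotation.TargetApi;
        import android.os.Build;

        // Lint stops flagging ICS-only types (ActionProvider, etc.)
        // inside a class annotated with the API level it truly needs.
        @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
        class SimpleMenuItemIcs {
            // ICS-specific implementation, guarded elsewhere by a
            // Build.VERSION.SDK_INT check before instantiation.
        }

    A project-level lint.xml suppression is another route if you'd rather not touch the sample sources.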

    Read the article

  • Exception in Google App Engine (Java) while trying to create Memcache Object

    - by Shreeni
    I am coming back to an old Google App Engine project in which I saw a bug. In the meantime I have been upgrading my App Engine SDK, which is now at 1.3. When I try to run the same project again, I see the following exception:
    java.lang.NoSuchMethodError: com.google.apphosting.api.ApiProxy$Environment.getDefaultNamespace()Ljava/lang/String;
    at com.google.appengine.api.NamespaceManager.get(NamespaceManager.java:56)
    at com.google.appengine.api.memcache.MemcacheServiceImpl.setNamespace(MemcacheServiceImpl.java:181)
    at com.google.appengine.api.memcache.MemcacheServiceImpl.(MemcacheServiceImpl.java:145)
    at com.google.appengine.api.memcache.MemcacheServiceFactory.getMemcacheService(MemcacheServiceFactory.java:25)
    The line causing the problem is: CacheManager.getInstance().getCacheFactory().createCache(Collections.emptyMap()); (It is the same line as the App Engine documentation suggests for creating a memcache object, and it used to work fine previously.) Any suggestions on how to fix it?
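
    A NoSuchMethodError like this usually points at mixed SDK jars on the classpath (an old appengine-api jar alongside the 1.3 runtime), so checking that only the current SDK's jars are deployed is a reasonable first step. As an alternative to the JCache wrapper, the low-level memcache API avoids CacheManager entirely; a sketch:

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;

        public class CacheHelper {
            public static Object getOrPut(String key, Object value) {
                MemcacheService cache =
                        MemcacheServiceFactory.getMemcacheService();
                Object cached = cache.get(key);  // null if absent
                if (cached == null) {
                    // Keys and values must be Serializable.
                    cache.put(key, value);
                    cached = value;
                }
                return cached;
            }
        }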

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:
    - authenticate my users - login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
    - run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
    - schedule a JavaScript to run every 15 or more minutes
    I had no idea of the magic that was happening inside… Where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those JavaScripts on the table CRUD actions - is that interpreted, what is that exactly? After working for some time with Azure Mobile Services I became a lot wiser:
    - Those tables are just normal tables in an Azure SQL Server 2012.
    - Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string-type column is created instead of a datetime column.
    - You can connect with SQL Management Studio to the Azure SQL Server, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
    - When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
    - You can also go to the SQL Database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data.
    - The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance.
    - The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JsHint is used for script validation.
    - The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services.
    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features like creating web APIs, using shared code from your scripts, command-line tools for managing Azure Mobile Services (upload and download scripts, for example), support for node modules, and git support:
    http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx
    In the meantime I rewrote all my "service-like" table scripts to API scripts, which works like a breeze. A bad thing with the current state of Azure Mobile Services is that the git support is not working if you are a co-administrator of your Azure subscription, and not an administrator (as in my case).
Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go from the browser client for APIs yet, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013 Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that's the place they are listening to (I hope!).
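For readers who haven't seen one, the custom API scripts mentioned above look roughly like this; a sketch in which the endpoint file name and table are hypothetical:

    // api/status.js - a minimal custom API sketch. exports.<verb>
    // maps to the HTTP method; request.service exposes the same
    // mssql/tables helpers available in table scripts.
    exports.get = function (request, response) {
        var mssql = request.service.mssql;  // raw SQL access
        mssql.query('select count(*) as total from MyTable', {
            success: function (rows) {
                response.send(200, { total: rows[0].total });
            },
            error: function (err) {
                response.send(500, { error: err.message });
            }
        });
    };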

    Read the article

  • Suds array of arrays not nesting

    - by joshcartme
    Let me preface this by saying that I am still pretty new to SOAP and how things should work. I'm working with the Vertical Response API. I'm having trouble getting suds to construct the xml correctly for a request. Here is some code: from suds.client import Client url = 'https://api.verticalresponse.com/wsdl/1.0/VRAPI.wsdl' client = Client(url) vr = client.service ... test_list = ( ( { 'name' : 'email_address', 'value' : login['username'], }, { 'name' : 'First_Name', 'value' : 'VR_User', } ), ( { 'name' : 'email_address', 'value' : '[email protected]', }, { 'name' : 'First_Name', 'value' : login['username'], }, ), ) # sid and cid are correctly retrieved prior to this point print "Sending test message..." vr.sendEmailCampaignTest({ 'session_id' : sid, 'campaign_id' : cid, 'recipients' : test_list, }) In this context login['username'] is just an email address. That code raises this error: suds.WebFault: Server raised fault: 'Application failed during request deserialization: Too many elements in array. 4 instead of claimed 2 (2) Here is the the definition of sendEmailCampaignTest: http://developers.verticalresponse.com/api/soap/methods/campaigns/sendemailcampaigntest/ Here is the xml that logging outputs (I removed the session_id and list_id for display here): DEBUG:suds.client:headers = {'SOAPAction': u'"VR/API/1_0#sendEmailCampaignTest"', 'Content-Type': 'text/xml; charset=utf-8'} ERROR:suds.client:<?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:ns3="http://api.verticalresponse.com/1.0/VRAPI.xsd" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://www.w3.org/2001/XMLSchema" xmlns:ns2="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns4="VR/API/1_0" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Header/> <ns0:Body> <ns4:sendEmailCampaignTest> <args xsi:type="ns3:sendEmailCampaignTestArgs"> <session_id xsi:type="ns1:string">redacted</session_id> <campaign_id xsi:type="ns1:int">redacted</campaign_id> <recipients xsi:type="ns3:ArrayOfNVDictionary" ns2:arrayType="ns3:NVDictionary[2]"> <item> <name xsi:type="ns1:string">email_address</name> <value xsi:type="ns1:string">[email protected]</value> </item> <item> <name xsi:type="ns1:string">First_Name</name> <value xsi:type="ns1:string">VR_User</value> </item> <item> <name xsi:type="ns1:string">email_address</name> <value xsi:type="ns1:string">[email protected]</value> </item> <item> <name xsi:type="ns1:string">First_Name</name> <value xsi:type="ns1:string">[email protected]</value> </item> </recipients> </args> </ns4:sendEmailCampaignTest> </ns0:Body> </SOAP-ENV:Envelope> DEBUG:suds.client:http failed: <?xml version="1.0" encoding="UTF-8"?><SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><SOAP-ENV:Fault><faultcode>SOAP-ENV:Client</faultcode><faultstring>Application failed during request deserialization: Too many elements in array. 
4 instead of claimed 2 (2) </faultstring></SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope> a ruby script (provided by Vertical Response) that does the same things as the script I am working on in python outputs the following xml (I removed the session_id and list_id): <?xml version="1.0" encoding="utf-8" ?> <env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <env:Body> <n1:sendEmailCampaignTest xmlns:n1="VR/API/1_0" env:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <args xmlns:n2="http://api.verticalresponse.com/1.0/VRAPI.xsd" xsi:type="n2:sendEmailCampaignTestArgs"> <session_id xsi:type="xsd:string">redacted</session_id> <campaign_id xsi:type="xsd:int">redacted</campaign_id> <recipients xmlns:n3="http://schemas.xmlsoap.org/soap/encoding/" xsi:type="n3:Array" n3:arrayType="n2:NVDictionary[2]"> <item xsi:type="n3:Array" n3:arrayType="n2:NVPair[2]"> <item> <name xsi:type="xsd:string">email_address</name> <value href="#id9496430"></value> </item> <item> <name xsi:type="xsd:string">First_Name</name> <value xsi:type="xsd:string">VR_User</value> </item> </item> <item xsi:type="n3:Array" n3:arrayType="n2:NVPair[2]"> <item> <name xsi:type="xsd:string">email_address</name> <value xsi:type="xsd:string">[email protected]</value> </item> <item> <name xsi:type="xsd:string">First_Name</name> <value href="#id9496430"></value> </item> </item> </recipients> </args> </n1:sendEmailCampaignTest> <value id="id9496430" xsi:type="xsd:string" env:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">[email protected]</value> </env:Body> </env:Envelope> I understand that the error is in the construction of recipients. It should contain two items, each that contain two items but my python script using suds is setting it up to contain four unnested items. So my question is how can I get suds to correctly construct the xml?
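
    One direction worth trying (a sketch, not a verified fix): build the WSDL types through suds' factory instead of passing raw tuples, so suds knows each recipient is itself an array. The type names below come from the logged XML above; the attribute holding the nested pairs ("item" here) depends on how the WSDL declares the array, so inspect the factory object with print before relying on it:

        # Sketch: let suds build the WSDL types so each recipient
        # becomes its own nested array element instead of being
        # flattened into one list of four items.
        recipients = []
        for email, first_name in [(login['username'], 'VR_User'),
                                  ('[email protected]', login['username'])]:
            member = client.factory.create('NVDictionary')  # one recipient
            pair_email = client.factory.create('NVPair')
            pair_email.name, pair_email.value = 'email_address', email
            pair_name = client.factory.create('NVPair')
            pair_name.name, pair_name.value = 'First_Name', first_name
            member.item = [pair_email, pair_name]  # attribute name per the WSDL
            recipients.append(member)

        vr.sendEmailCampaignTest({
            'session_id': sid,
            'campaign_id': cid,
            'recipients': recipients,
        })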

    Read the article

  • Cassandra API equivalent of "SELECT ... FROM ... WHERE id IN ('...', '...', '...');"

    - by knorv
    Assume the following data set:

    id      age  city      phone
    ==      ===  ====      =====
    alfred  30   london    3281283
    jeff    43   sydney    2342734
    joe     29   tokyo     1283881
    kelly   54   new york  2394929
    molly   20   london    1823881
    rob     39   sydney    4928381

    To get the following result set ..

    id      age  phone
    ==      ===  =====
    alfred  30   3281283
    joe     29   1283881
    molly   20   1823881

    .. using SQL one would issue ..

    SELECT id, age, phone FROM dataset WHERE id IN ('alfred', 'joe', 'molly');

    What is the corresponding Cassandra API call that would yield the same result set in one command?
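
    At the Thrift level the matching call is multiget_slice: several row keys, one column filter, one round trip. Through pycassa (a Python client), a sketch in which the keyspace and column family names are assumed from the example:

        import pycassa

        pool = pycassa.ConnectionPool('Keyspace1')      # keyspace assumed
        people = pycassa.ColumnFamily(pool, 'dataset')  # column family assumed

        # One call, several keys, restricted to the wanted columns --
        # the analogue of SQL's WHERE id IN (...) plus a column list.
        rows = people.multiget(['alfred', 'joe', 'molly'],
                               columns=['age', 'phone'])
        for key, cols in rows.items():
            print key, cols['age'], cols['phone']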

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even an acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.
Collaborative BI
I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."
The Microsoft BI Stack in General
A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"
Expo Hall
I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • CodePlex Daily Summary for Thursday, December 30, 2010

    CodePlex Daily Summary for Thursday, December 30, 2010Popular ReleasesVarddienis - Windows Sidebar sikriks: Varddienis 0.9.5.0: Pievienota "Svetku" funkcija; Tiek paraditi Latvijas valsts svetki, atceres un atzimejamas dienas. Pievienoti “Atrie taustini” jeb isceli (laujot atrak izmantot Varddiena iespejas); Pievienota jauna poga - "Isceli", kas lauj lietotajam apskatit Varddieni pieejamos iscelus, un to taustinus. Nelielas izmainas: Nedaudz uzlabots JavaScript kods, Izmainits lidojošo logu aizveršanas krustinš – tagad tas klust dzeltens, ja uz ta uzbrauc ar peli; Ieverojami parkartoti un samazinati sik...SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. added truncate table/defrag index/check db functions 2. improved alert 3. fixed problem with alert causing config file corrupted(hopefully)People's Note: People's Note 0.20: Version 0.20 is all about polishing the UI and supporting other developers. Several screens got better handling of the "close" button and the Escape key. The ink note screen got more traditional sketching colours, instead of the primaries. It also got a greater brush size. Messages for network errors have been improved. The Evernote API library got a Win32 target. Readme.txt was updated with additional instructions. To install: copy the appropriate CAB file onto your WM device and run i...ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Object Cache Sample: Object cache sample for Windows Forms Applications. This sample project demonstrates the usage of PCache class.Analysis Services Stored Procedure Project: 1.3.5 Release: This release includes the following fixes and new functionality: Updates to GetCubeLastProcessedDate to work with perspectives Fixes to reports that call Discover functions improving drillthrough functions against perspectives improving ExecuteDrillthroughAndFixColumns logic fixing situation where MDX query calling certain ASSP sprocs which opened external connections caused deadlock to SSAS processing small fix to Partition code when DbColumnName property doesn't exist changes...DocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox AutoLoL will now attemt to create file associations for mastery (*.lolm) files Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering Changed the way AutoLoL detects if another instance is already running Changed the format of the mastery files to allow more information stored in* Dialogs will now focus the Ok or Cancel button which allows the user to press Return to clo...Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. 
PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!Razor Templating Engine: Razor Templating Engine v1.2: Changes: ADDED: Standard namespaces imports for all templates: System, System.Collections.Generic, System.Linq (Changeset 5635) ADDED: Methods for Precompilation (Changeset 3283) CHANGED: Refactored precompilation to be exposed per-TemplateService. (Changeset 3440) CHANGED: Added more descriptive compilation exception message. (Changeset 3629) FIXED: Forced reference to Microsoft.CSharp to correct support for testing frameworks. (Changeset 3689) FIXED: Added support for nested anonymous obj...Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by facebook change (error with redirects in Safari) Authenticator fix, always returning true From 4.1.0 Release Lots of bug fixes Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample Changed internal serialization to use Json.net BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!EnhSim: EnhSim 2.2.7 ALPHA: 2.2.7 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...LINQ to Twitter: LINQ to Twitter Beta v2.0.19: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.Euro for Windows XP: ChangeRegionalSettings 1..0: *Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when network connection is flakey, so I discovered them all while at my parents' house for Christmas).NoSimplerAccounting: NoSimplerAccounting 6.0: -Fixed a bug in expense category report.NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (Thanks to Angelo)NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? ?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. 
Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login t...New ProjectsAlquerque.net: Quer mostrar todo seu potencial? Destacar sua idéia inovadora? Desenvolva uma solução na plataforma .NET e prove que você está preparado para o mercado!Buccaneer: Buccaneer is a very extensive version of the windows explorer, which can be even furter extended with selfmade plugins. It is developed in c#.Chianti: Project Chianticomputing pi with a webcam: computing pi with a webcam and a spinning plate using buffon's methodConfree for Outlook: Confree lets you create audio conferences from Claro / Telmex directly from Outlook.Esurfing: EsurfingFoursquare Helper for WebMatrix: The Foursquare Helper for WebMatrix makes it simple to integrate Foursquare in your site. With a few lines of code you'll be able to show an "Add to My Foursquare" button or show any user's badges in your site.GetSatisfaction Helper for WebMatrix: The GetSatisfaction Helper for WebMatrix allows you to easily integrate GetSatistaction feedback functionality into your site. It provides a set of widgets for your users to share their ideas, questions, problems, and praises.GoodStore: ??????????,??B/S??,asp.net?sqlserver???,???????,?????????。。。Groupon Helper for WebMatrix: The Groupon Helper for WebMatrix allows you to easily add a Groupon badge to your WebMatrix site. When the helper is in place, it can query the Groupon API to get the deals for a given location, for you to display them in new, different ways. HarrierSight: An extensible application for analyze of spatial dataiPlay: iPlay is a WPF application built using MVVM for generating iTunes playlists randomly and displaying the status of iTunes in a user friendly way. It's secondary purpose is to explore the capabilities of WPF and MVVM in a contextual way.iSun Studio CMS: SNS Kiiro: An easy to use collaboration and project management application built for SharePoint. Kiiro lets your team collaborate on projects, documents, discussions, tasks and issues all within a simple, easy-to-use interface.Kiva7: The Kiva7 app is a Windows Phone 7 app for www.kiva.com. It shows all information on your loans and allows you to search for new loans. LeanEngine Framework: The LeanEngine framework makes it easier and faster for developers to develop .Net data centric applications. It's developed in C# language.MedSpeech: Guardado de grabaciones sobre los estudios radiologicos para luego poder realizar reportes de estos.Open Gran Turismo: Open Gran Turismo is an opensource car racing game highly customizable developed in XNA and BEPU physics. Opt.Net: Command line options and arguments parsing library for .NET 3.5 and 4.0 programs. Uses reflection to convert command line arguments and options into property values on an object that the application defines. Will support command pattern programs as well.Plancast Helper for WebMatrix: The Plancast Helper for WebMatrix provides an easy way to integrate Plancast on your WebMatrix site. With a few lines of code you'll able to show your Plancast plans or the ones from your friends. Polldaddy Helper for WebMatrix: The Polldaddy helper makes easy to add Poll widgets, ratings and surveys to your WebMatrix site in a few lines of code. 
It also provides access to the Polldaddy API, wrapping some of the API methods to retrieve Poll data.Scribd Helper for WebMatrix: The Scribd Helper for WebMatrix allows you to easily add Scribd documents to your site. When the helper is in place, it interacts with the Scribd API and with Scribd Reader to easily list your documents, enabling users to view them without having to leave your site. SharePoint 2010 Custom List Form Demo: This example will show you how to create a custom list form for SharePoint 2010 in Visual Studio 2010 using SharePoint Designer 2010 and VS2010... See my blog for a "Walkthrough": http://ikarstein.wordpress.comTdd unit test bar for Windows Phone development: a simple application which launches NUnit-Console on your Windows Phone unit tests every time you build, using a SilverLight version of NUnit. The output is then colored for better readability: Green bar if success, Red bar if failure.Techne: Techne is a program which will take a user input of a color or picture and then using motors to pipette paint will manually create the colors described by the user and draw a picture. The goal is to continually expand the system into performing more complicated tasks.TelerikTest1: ????Telerik??111111111TriExporterNET: TriExporter .NET Twitter Helper for WebMatrix: The Twitter Helper for WebMatrix makes it simple to integrate several Twitter social features in your site. For example, you can display Twitter widgets like "Follow Me" and "Tweet" Buttons, and access the Timeline Resources exposed by the Twitter API in a few lines of code.windows azure backup: If you need to have a backup of your files using Windows Azure, then this is the project to download. It incorporates an "admin" user and his backups. Can upload/retrieve files from local hard drive. Made with ASP.NET MVC.WP7PrintHelper: This project is aimed at the Windows Phone 7 developers that need to print from an App they are developing. The project provides a WCF service that runs on any desktop or server, and a print dialog dll that runs on the Windows 7 phone. It is developed in Framework 4.0 Client C#Wufoo Helper for WebMatrix: The Wufoo Helper for WebMatrix provides an easy way to integrate Wufoo forms and data into your WebMatrix site. It allows you to add Wufoo forms in your pages and integrate the data submitted in your forms by using Web Hooks.

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.Self-Service BISelf-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI.This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users.It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.Collaborative BII have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time.Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."The Microsoft BI Stack in GeneralA question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"Expo HallI had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here.Pragmatic Works. 
Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, and I think it represents a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle, and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the ebooks are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or because multiple sessions I wanted to see were running concurrently. Fortunately, many of the sessions are available for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation.

    Read the article

  • Using Fancybox with Google Static Maps

    - by Levi Hackwith
    Setup:

    - I have multiple links on a page with the class location_link.
    - Each link's rel attribute is set to a city/state combo (e.g., "Omaha, NE").
    - Once the page is loaded, a JavaScript function loops through all of the location_link items and binds a click event to them using jQuery.
    - The click event fires a call to the Fancybox constructor that is supposed to show a Google Map of the location that link is associated with.

    The problem: whenever I click on one of the "location links", I get the following error message: "The requested content cannot be loaded. Please try again later."

    Code I've already written:

        function setUpLocationLinks() {
            locationLinks = $("a.location_link");
            locationLinks.click(function() {
                var me = $(this);
                console.log(me.attr("href"));
                $.fancybox({
                    "showCloseButton"    : true,
                    "hideOnContentClick" : true,
                    "titlePosition"      : "inside",
                    "title"              : me.attr("rel"),
                    "type"               : "image"
                });
                return false;
            });
        }

    Research I've already done:

    - The Google Static Maps API no longer requires an API key. The following is from the Google Static Maps API page: "Note: The Google Static Maps API no longer requires a Maps API key! (Google Maps API Premier customers should instead sign their URLs using a new cryptographic key which will be sent to you. See the Premier documentation for more information.)"
    - The image URL I'm using does resolve and pulls back the data I need.
    - When I put the above-mentioned URL into a standard <img> tag, the map shows up just fine.

    I'd like to pull this off without having to create some sort of dummy <img> tag that I'm constantly switching the src attribute out of. Hopefully, you'll find this information helpful. Please let me know if you have any other questions.
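    A sketch of one possible fix, assuming each link's href holds the Static Maps image URL: Fancybox's "image" type needs to be told where the content lives via the href option; without it, Fancybox has nothing to load and falls back to the generic "requested content cannot be loaded" error.

        function setUpLocationLinks() {
            $("a.location_link").click(function() {
                var me = $(this);
                $.fancybox({
                    // pass the image URL explicitly; "type": "image" alone
                    // doesn't tell Fancybox what to load
                    "href"               : me.attr("href"),
                    "showCloseButton"    : true,
                    "hideOnContentClick" : true,
                    "titlePosition"      : "inside",
                    "title"              : me.attr("rel"),
                    "type"               : "image"
                });
                return false;
            });
        }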

    Read the article

  • Authentication problem with Wufoo

    - by fudgey
    I set up a Wufoo form with admin-only portions that will only show up if I am logged in. I read through the Wufoo API documentation and I can get the authentication to work, but when I try to access the form after I authenticate, it says I need to authenticate. This is what I have so far (subdomain, API key, and form id changed):

        <?php
        error_reporting(E_ALL);
        ini_set('display_errors', 1);

        $curl1 = curl_init('http://fishbowl.wufoo.com/api/v3/users.xml');
        curl_setopt($curl1, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl1, CURLOPT_USERPWD, 'AOI6-LFKL-VM1Q-IEX9:footastic');
        curl_setopt($curl1, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($curl1, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($curl1, CURLOPT_FOLLOWLOCATION, false);
        curl_setopt($curl1, CURLOPT_USERAGENT, 'Wufoo Sample Code');
        $response = curl_exec($curl1);
        $resultStatus = curl_getinfo($curl1);

        if ($resultStatus['http_code'] == 200) {
            echo 'success!<br>';
        } else {
            echo 'Call Failed ' . print_r($resultStatus);
        }

        $curl2 = curl_init("http://fishbowl.wufoo.com/api/v3/forms/w7x1p5/entries.json");
        curl_setopt($curl2, CURLOPT_HEADER, 0);
        curl_setopt($curl2, CURLOPT_RETURNTRANSFER, 1);
        $response = curl_exec($curl2);
        curl_close($curl2);
        echo $response;
        curl_close($curl1);
        ?>

    It doesn't matter if I close $curl1 before or after I call $curl2; I get the same message on my screen: "success! You must authenticate to get at the goodies." And I know the API key, subdomain, and form id are all correct.

    And one last bonus question: can I do all of this using Ajax instead? The page I will be displaying the form on will already be limited to admin access, so exposing the API shouldn't matter.
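    HTTP authentication with cURL is per-handle, not per-session: the credentials set on $curl1 don't carry over to $curl2, so the second request goes out unauthenticated. A minimal sketch of the fix, reusing the same placeholder credentials as above:

        $curl2 = curl_init("http://fishbowl.wufoo.com/api/v3/forms/w7x1p5/entries.json");
        // each cURL handle needs its own credentials
        curl_setopt($curl2, CURLOPT_USERPWD, 'AOI6-LFKL-VM1Q-IEX9:footastic');
        curl_setopt($curl2, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($curl2, CURLOPT_HEADER, 0);
        curl_setopt($curl2, CURLOPT_RETURNTRANSFER, 1);
        $response = curl_exec($curl2);
        curl_close($curl2);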

    Read the article

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to windows messaging on the Pocket PC. What I have is a Pocket PC application written in C++. It has only one standard message loop:

        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

    We also have standard dlgProcs. In the switch of the dlgProc, we will call a proprietary 3rd-party API. This API uses a socket connection to communicate with another process.

    The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice faster than they should), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to process the next (identical UI) message. Well, since the second message also makes the API call, the call fails because it is locked. Because of the design of this legacy system, the API will be locked until the recursion comes back out (which is also triggered by the user, so it could be locked the entire working day).

    I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated! Thanks!!
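    One defensive sketch (function and API names hypothetical): guard the handler with a reentrancy flag so the duplicate tap is simply ignored while the first call is still in flight, rather than queuing a second call against the locked API:

        // sketch: reentrancy guard around the proprietary API call;
        // CallProprietaryApi() stands in for the real 3rd-party call
        void OnLocationTap()
        {
            static bool inProgress = false;
            if (inProgress)
                return;                 // swallow the duplicate tap
            inProgress = true;
            CallProprietaryApi();       // the blocking, socket-backed call
            inProgress = false;
        }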

    Read the article

  • java: can't use constructors in abstract class

    - by ufk
    Hi. I created the following abstract class for a job scheduler in Red5:

        package com.demogames.jobs;

        import com.demogames.demofacebook.MysqlDb;
        import org.red5.server.api.IClient;
        import org.red5.server.api.IConnection;
        import org.red5.server.api.IScope;
        import org.red5.server.api.scheduling.IScheduledJob;
        import org.red5.server.api.so.ISharedObject;
        import org.apache.log4j.Logger;
        import org.red5.server.api.Red5;

        /**
         * @author ufk
         */
        abstract public class DemoJob implements IScheduledJob {

            protected IConnection conn;
            protected IClient client;
            protected ISharedObject so;
            protected IScope scope;
            protected MysqlDb mysqldb;

            protected static org.apache.log4j.Logger log = Logger.getLogger(DemoJob.class);

            protected DemoJob(ISharedObject so, MysqlDb mysqldb) {
                this.conn = Red5.getConnectionLocal();
                this.client = conn.getClient();
                this.so = so;
                this.mysqldb = mysqldb;
                this.scope = conn.getScope();
            }

            protected DemoJob(ISharedObject so) {
                this.conn = Red5.getConnectionLocal();
                this.client = this.conn.getClient();
                this.so = so;
                this.scope = conn.getScope();
            }

            protected DemoJob() {
                this.conn = Red5.getConnectionLocal();
                this.client = this.conn.getClient();
                this.scope = conn.getScope();
            }
        }

    Then I created a simple class that extends the previous one:

        public class StartChallengeJob extends DemoJob {
            public void execute(ISchedulingService service) {
                log.error("test");
            }
        }

    The problem is that my main application can only see the constructor without any parameters, which means I can only do new StartChallengeJob(). Why doesn't the main application see all the constructors? Thanks!
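    Constructors in Java are not inherited: StartChallengeJob only gets an implicit no-arg constructor that calls super(). To expose the other two, the subclass has to redeclare them and delegate, along these lines:

        public class StartChallengeJob extends DemoJob {

            // constructors must be redeclared; they don't come along with extends
            public StartChallengeJob() {
                super();
            }

            public StartChallengeJob(ISharedObject so) {
                super(so);
            }

            public StartChallengeJob(ISharedObject so, MysqlDb mysqldb) {
                super(so, mysqldb);
            }

            public void execute(ISchedulingService service) {
                log.error("test");
            }
        }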

    Read the article

  • What should the standard be for ReSTful URLS?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app, which seems like a good idea. However, there seems to be a lot of contradictory thought on what the best scheme is for ReSTful URLs.

    Some people advocate simple, pretty URLs:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the URL, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content type to GET requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    whereas others think the content type should be sent in the HTTP header.

    Soooooooo... That's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator, and content type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you", but that doesn't really fit with the ReST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind: what should the standard be for ReSTful URLs?
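    For comparison, the header-based variant keeps one canonical URL per resource and lets standard HTTP content negotiation select the representation (a sketch reusing the hypothetical host and paths above):

        GET /v1/resource/1 HTTP/1.1
        Host: api.myapp.com
        Accept: application/json

    versus the extension style, where the format is baked into the URL itself:

        GET /v1/resource/1.json HTTP/1.1
        Host: api.myapp.com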

    Read the article

  • How to correctly load dependent JavaScript files

    - by Vaibhav Garg
    I am trying to extend a website page that displays Google Maps with the LabeledMarker. The Google Maps API defines a class called GMarker, which is extended by LabeledMarker. The problem is, I can't seem to load the LabeledMarker script properly, i.e. after the Google API loads, and I get a 'GMarker not defined' error. What is the correct way to specify the scripts in such cases?

    I am using ASP.NET's ClientScript.RegisterClientScriptInclude() first for the Google API URL and then, immediately after, for the LabeledMarker script file. The initial Google API loader writes further script links that load the actual GMarker class. Shouldn't all those scripts be executed before the next script block (the LabeledMarker script) is processed? I have checked the generated HTML and the script blocks are emitted in the right order:

        <script src="google api url" type="text/javascript"></script>
        ...
        (the above script uses document.write() etc. to append further script blocks/sources)
        ...
        <script src="Scripts/LabeledMarker.js" type="text/javascript"></script>

    Once again, LabeledMarker.js seems to get executed before the Google API finishes loading.
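    One workaround sketch, assuming the page can use the Google AJAX APIs loader (www.google.com/jsapi): load Maps v2 through the loader, whose callback fires only once the Maps code is fully available, and append the dependent script from there (the path is illustrative):

        <script src="http://www.google.com/jsapi" type="text/javascript"></script>
        <script type="text/javascript">
            // wait for the loader's callback before touching GMarker
            google.load("maps", "2", { callback: function() {
                // GMarker exists now, so it's safe to pull in LabeledMarker
                var s = document.createElement("script");
                s.type = "text/javascript";
                s.src = "Scripts/LabeledMarker.js";
                document.getElementsByTagName("head")[0].appendChild(s);
            }});
        </script>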

    Read the article

  • Is there a fundamental difference between malloc and HeapAlloc (aside from the portability)?

    - by Lambert
    Hi, I have code that, for various reasons, I'm trying to port from the C runtime to one that uses the Windows Heap API. I've encountered a problem: if I redirect the malloc/calloc/realloc/free calls to HeapAlloc/HeapReAlloc/HeapFree (with GetProcessHeap for the handle), the memory seems to be allocated correctly (no bad pointer returned, and no exceptions thrown), but the library I'm porting says "failed to allocate memory" for some reason.

    I've tried this both with the Microsoft CRT (which uses the Heap API underneath) and with another company's runtime library (which uses the Global Memory API underneath); the malloc for both of those works well with the library, but for some reason, using the Heap API directly doesn't work. I've checked that the allocations aren't too big (>= 0x7FFF8 bytes), and they're not.

    The only problem I can think of is memory alignment; is that the case? Or other than that, is there a fundamental difference between the Heap API and the CRT memory API that I'm not aware of? If so, what is it? And if not, then why does the static Microsoft CRT (included with Visual Studio) take some extra steps in malloc/calloc before calling HeapAlloc? I'm suspecting there's a difference, but I can't think of what it might be. Thank you!
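    A few CRT guarantees don't come for free from the Heap API: calloc must return zeroed memory (HeapAlloc only zeroes when passed HEAP_ZERO_MEMORY), realloc(NULL, n) must act like malloc(n), and free(NULL) must be a no-op. A minimal shim sketch covering those cases (wrapper names are hypothetical):

        #include <windows.h>

        /* sketch: CRT-compatible wrappers over the process heap */
        void *my_malloc(size_t n)
        {
            return HeapAlloc(GetProcessHeap(), 0, n);
        }

        void *my_calloc(size_t count, size_t size)
        {
            /* calloc must zero the block; plain HeapAlloc does not */
            return HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, count * size);
        }

        void *my_realloc(void *p, size_t n)
        {
            /* realloc(NULL, n) behaves like malloc(n) */
            if (p == NULL)
                return HeapAlloc(GetProcessHeap(), 0, n);
            return HeapReAlloc(GetProcessHeap(), 0, p, n);
        }

        void my_free(void *p)
        {
            /* free(NULL) is a no-op; don't hand HeapFree a NULL */
            if (p != NULL)
                HeapFree(GetProcessHeap(), 0, p);
        }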

    Read the article

  • MS Access: Permission problems with views

    - by Keith Williams
    "I'll use an Access ADP" I said, "it's only a tiny project and I've got better things to do", I said, "I can build an interface really quickly in Access" I said. </sarcasm> Sorry for the rant, but it's Friday, I have a date in just under two hours, and I'm here late because this just isn't working - so, in despair, I turn to SO for help. Access ADP front-end, linked to a SQL Server 2008 database Using a SQL Server account to log into the database (for testing); this account is a member of the role, "Api"; this role has SELECT, EXECUTE, INSERT, UPDATE, DELETE access to the "Api" schema The "Api" schema is owned by "dbo" All tables have a corresponding view in the Api schema: e.g. dbo.Customer -- Api.Customers The rationale is that users don't have direct table access, but can deal with views as if they were tables I can log into SQL using my test login, and it works fine: no access to the tables, but I can select, insert, update and delete from the Api views. In Access, I see the views, I can open them, but whenever I try to insert or update, I get the following error: The SELECT permission was denied on the object '[Table name which the view is using]', database '[database name]', schema 'dbo' Crazy as it sounds, Access seems to be trying to access the underlying table rather than the view. Any ideas?

    Read the article

< Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >