Search Results

Search found 31355 results on 1255 pages for 'google group'.


  • How to geocode a large number of addresses?

    - by user308569
    I need to geocode, i.e. translate street addresses to latitude/longitude, for ~8,000 street addresses. I am using both the Yahoo and Google geocoding engines at http://www.gpsvisualizer.com/geocoder/, and found that for a large number of addresses those engines (one of them or both) either could not perform the geocoding (i.e. they return latitude=0, longitude=0), or return the wrong coordinates (including cases where Yahoo and Google give different results). What is the best way to handle this problem? Which engine is (usually) more accurate? I would appreciate any thoughts, suggestions, or ideas from people who have previous experience with this kind of task.
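
    As a rough sketch of the cross-checking approach: flag an address for manual review when either engine returns (0,0) or the two engines disagree by more than a small tolerance. The helper below is illustrative only; the two LatLng arguments are whatever your Google and Yahoo lookups return.

        // Illustrative cross-check between two geocoding results (sketch only).
        public final class GeoCrossCheck {
            public static final class LatLng {
                public final double lat, lng;
                public LatLng(double lat, double lng) { this.lat = lat; this.lng = lng; }
            }

            // ~0.01 degrees is roughly 1 km; tune to your accuracy needs.
            private static final double TOLERANCE_DEG = 0.01;

            public static boolean needsManualReview(LatLng google, LatLng yahoo) {
                if (isZero(google) || isZero(yahoo)) return true;        // an engine failed
                return Math.abs(google.lat - yahoo.lat) > TOLERANCE_DEG  // engines disagree
                    || Math.abs(google.lng - yahoo.lng) > TOLERANCE_DEG;
            }

            private static boolean isZero(LatLng p) {
                return p.lat == 0.0 && p.lng == 0.0;
            }
        }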

    Read the article

  • Embed VLC player in GWT

    - by chrisnfoneur
    Hello, I want to embed a VLC player in my webapp built with Google's GWT. First I had a look at this page: http://wiki.videolan.org/GWT, which offers a nice solution, but I had to implement all the JavaScript function calls (play, stop, fullscreen) with JSNI. Then I found gwt-player (hosted on Google Code), which does all the work for me, but the annoying part is that the project is not widely used (few posts each month on the project's group, not many mentions in blogs/forums...). Do you know another option to easily embed & control a VLC player in a GWT app? My main goal is to play any video/audio file in a webapp and offer the user a fast-forward feature (set rate in VLC). Is there any other player I could use? I already had a look at QuickTime, Windows Media Player & Flowplayer; none of them offers as many features as VLC. Thanks in advance & have a nice New Year's Eve. Chris
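
    For reference, the JSNI glue the VideoLAN wiki approach requires is fairly small. The sketch below is a guess at the minimum wrapper; the plugin property names (playlist.play, playlist.stop, input.rate) come from the VLC web-plugin API and should be verified against your plugin version.

        import com.google.gwt.dom.client.Element;

        // Minimal JSNI wrappers around an embedded VLC plugin element (sketch only).
        public class VlcControls {
            public static native void play(Element vlc) /*-{ vlc.playlist.play(); }-*/;
            public static native void stop(Element vlc) /*-{ vlc.playlist.stop(); }-*/;
            // rate > 1.0 plays faster; this is the "set rate in VLC" fast-forward idea
            public static native void setRate(Element vlc, double rate) /*-{ vlc.input.rate = rate; }-*/;
        }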

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints.

    The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    What is the Professional Scrum Developer course all about?

    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?

    This course is suitable for any member of a software development team - architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?

    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:

    - Form effective teams
    - Explore and understand legacy "Brownfield" architecture
    - Define quality attributes, acceptance criteria, and "done"
    - Create automated builds
    - How to handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Set up and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Set up a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team's software development process

    What does the week look like?

    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints

    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:

    Component               | Description                                                                    | Minutes
    Instruction             | Presentation and demonstration of new and relevant tools & practices          | 60
    Sprint planning meeting | Product owner presents backlog; each team commits to delivering functionality | 10
    Sprint planning meeting | Each team determines how to build the functionality                           | 10
    The Sprint              | The team self-organizes and self-manages to complete their tasks              | 120
    Sprint Review meeting   | Each team will present their increment of functionality to the other teams    | 30
    Sprint Retrospective    | A group retrospective meeting will be held to inspect and adapt               | 10

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: INTRODUCTION

    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.

    - Trainer and student introductions
    - Professional Scrum Developer program
    - Agenda
    - Logistics
    - Team formation
    - Retrospective

    Module 2: SCRUMDAMENTALS

    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.

    - Scrum overview
    - Scrum roles
    - Scrum timeboxes (ceremonies)
    - Scrum artifacts
    - Simulation
    - Retrospective

    It's required that you read Ken Schwaber's Scrum Guide in preparation for this module and course.

    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010

    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation - this time using Visual Studio to manage their product development.

    - Mapping Scrum to Visual Studio 2010
    - User Story work items
    - Task work items
    - Bug work items
    - Demonstration
    - Simulation
    - Retrospective

    Module 4: THE CASE STUDY

    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what of the work that will take place during the upcoming sprints. The team will then define the quality attributes of the project and their definition of "done." The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.

    - Introduction to the case study
    - Download the source code, build, and explore the application
    - Define the quality attributes for the project
    - Define "done"
    - How to file effective bugs in Visual Studio 2010
    - Retrospective

    Module 5: HOTFIX

    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application's architecture and code in order to locate and fix the Product Owner's high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.

    - How to use Architecture Explorer to visualize and explore
    - Create a unit test to validate the existence of a bug
    - Find and fix the bug
    - Validate and close the bug
    - Retrospective

    Module 6: PLANNING

    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.

    - Release vs. Sprint planning
    - Release planning and the Product Backlog
    - Product Backlog prioritization
    - Acceptance criteria and tests
    - Sprint planning and the Sprint Backlog
    - Creating and linking Sprint tasks
    - Retrospective

    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: EMERGENT ARCHITECTURE

    This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner's prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.

    - Architecture and Scrum
    - Emergent architecture
    - Principles, patterns, and practices
    - Visual Studio 2010 modeling tools
    - UML and layer diagrams
    - SPRINT 1
    - Retrospective

    Module 8: TEST DRIVEN DEVELOPMENT

    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member's code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio's Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.

    - Continuous integration
    - Team Foundation Build
    - Test Driven Development (TDD)
    - Refactoring
    - Test Impact Analysis
    - SPRINT 2
    - Retrospective

    Module 9: AGILE DATABASE DEVELOPMENT

    This module lets the SQL Server database developers in on a little secret - they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.

    - Agile database development
    - Visual Studio database projects
    - Importing schema and scripts
    - Building and deploying
    - Generating data
    - Unit testing
    - SPRINT 3
    - Retrospective

    Module 10: SHIP IT

    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.

    - Acceptance criteria
    - Testing in Visual Studio 2010
    - Microsoft Test Manager
    - Writing and running manual tests
    - Branching
    - SPRINT 4
    - Retrospective

    Module 11: OVERCOMING DYSFUNCTION

    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.

    - Scrum-butts and flaccid Scrum
    - Best practices working as a team
    - Team challenges
    - ScrumMaster challenges
    - Product Owner challenges
    - Stakeholder challenges
    - Course Retrospective

    What will be expected of you and your team?

    This is a unique course in that it's technically focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software.

    All attendees must commit to:

    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver

    All teams should have these skills:

    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience

    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams

    Another unique attribute of this course is that it's a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?

    Because of the nature of this course, as explained above, certain types of people should probably not attend:

    - Students requiring command and control style instruction - there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don't have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team - not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • microphone volume control from javascript

    - by paleozogt
    I'd like to be able to control the system microphone volume from the browser. I know that the microphone can be recorded from using Flash or Silverlight, but these don't allow control of the microphone volume. (Flash has Microphone.gain, but as it's just a software multiplier, it doesn't help when the system mic volume is muted or too loud.) I've heard that HTML5 will have some sort of microphone access, but whether it will allow volume control is unclear to me. At any rate, I don't think any browsers support it yet. Are there any plugins that would allow volume control? The old Google Gears project has some AudioApi docs, though these don't seem to have made it into the actual plugin. There's also the Google Talk plugin; it seems to do some kind of gain control, but it's unclear if there's an API to the plugin. Perhaps there's a draft HTML5 implementation plugin for microphone access (like IndexedDB, for example)?

    Read the article

  • Sending persisted JDO instances over GWT-RPC

    - by Ben Daniel
    I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app. Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC? I notice the Stock Watcher has separate classes, and I can theorise why: otherwise the GWT compiler would try to generate JavaScript for everything the persisted class referenced, like JDO and com.google.blah.users.User, etc. Also, there may be logic in the server-side class which doesn't apply to the client, and vice versa. I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
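
    For illustration, a minimal sketch of the two-class pattern described in the question, with hypothetical names; the annotations are from javax.jdo.annotations, and the DTO lives in a client-visible package:

        // Stock.java - server-side entity; not GWT-translatable because of the JDO imports.
        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        @PersistenceCapable
        public class Stock {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Long id;

            @Persistent
            private String symbol;

            // Conversion happens on the server before the RPC call returns.
            public StockDTO toDTO() {
                return new StockDTO(id, symbol);
            }
        }

        // StockDTO.java - shared package; plain Java only, so the GWT compiler can translate it.
        public class StockDTO implements java.io.Serializable {
            private Long id;
            private String symbol;

            public StockDTO() {}  // no-arg constructor required by GWT-RPC
            public StockDTO(Long id, String symbol) { this.id = id; this.symbol = symbol; }
        }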

    Read the article

  • How to get the data for intra-day candlestick charts for stocks on e.g. the Nasdaq

    - by Chris
    Hi, for a learning exercise I want to create candlestick (stock) graphs for stocks using ZedGraph. Now, on Google Finance I can get daily open-high-low-close data, which is perfect for making these graphs, but I want to create intra-day graphs, e.g. open-high-low-close data for an hour (or 5 mins, or even 1 min). Is there any way to get that kind of data without having to subscribe to an expensive service? I've heard opentick mentioned in an old SO question, but their site is defunct now. I was thinking of polling Google Finance once a minute to get the latest stock price; then, with an hour's worth of 60 prices, I could roughly calculate the open-high-low-close, but this is a bit of an estimation and I'm open to other suggestions. Cheers all.
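
    For the aggregation step itself, here is a minimal sketch (no external APIs assumed) that folds polled prices into a single open-high-low-close candle; as noted above, one sample per minute makes the intra-minute high/low an approximation.

        // Accumulates sampled prices into one OHLC candle for a fixed interval.
        public class Candle {
            private double open, high, low, close;
            private boolean empty = true;

            public void add(double price) {
                if (empty) {
                    open = high = low = price;   // first sample opens the candle
                    empty = false;
                } else {
                    high = Math.max(high, price);
                    low = Math.min(low, price);
                }
                close = price;                   // last sample seen becomes the close
            }

            public double getOpen()  { return open; }
            public double getHigh()  { return high; }
            public double getLow()   { return low; }
            public double getClose() { return close; }
        }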

    Read the article

  • Using Google Maps with JXMapViewer

    - by npinti
    I have been searching the web to see if I can use Google Maps with JXMapViewer. According to this, it is illegal, but the article is more than three years old. Could anyone be kind enough to tell me if I can use Google Maps with JXMapViewer? I know that Google has recently allowed desktop applications to use their static maps, provided that the application is freely accessible to people on some website. If this can be done, I would appreciate some pointers to where I could start looking so that I can use Google Maps; I tried messing around with this but to no avail. Thanks in advance.

    Read the article

  • Missing Menu Bar in Visual Studio 2010

    - by Jeremy Sullivan
    When I opened up Visual Studio 2010 this morning, my menu bar (you know, the bar with File, Edit, etc. on it) was missing. I've tried all of the right-click menus, customize options, function keys and Google searches that I can think of, but to no avail. This is exactly the kind of humiliation that brings sobriety to my ever-growing ego as a developer. I'm actually posting this on StackOverflow.com for the world to see. Next, I will be just as humiliated when someone replies to this and tells me precisely how simple it is and how ignorant I am for not knowing the simple key combination or finding it on Google. Thank you, your majesty, for your sharp retort in advance.

    Read the article

  • How to detect when a Flash-based music player is done playing music

    - by rsapru
    Hi, I am trying to create a very basic, minimalistic playlist application for http://www.google.co.in/music. Basically, Google Music displays links to songs, and when one is clicked it opens a Flash-based player. Using my application I collect music links into a list box; when a song is clicked, it opens the link in a separate web browser and starts playing. As of now, to achieve playlist functionality, I also collect the duration of each song and create a timer based on it; once the time is up, it switches to the next song. My question here is: is it possible to detect when the Flash player is done playing music? Currently the timer functionality that I have built is not reliable. If interested, you can have a look at the code: http://gmp.codeplex.com/

    Read the article

  • Are tile overlays possible with the iPhone's MapKit

    - by rickharrison
    I already have a tile source set up for use with the Google Maps JavaScript API. I am trying to translate this for use with the iPhone's MapKit. I have correctly implemented the JavaScript zoom levels in MapKit: whenever - (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated is called, I snap the region to the nearest zoom level based on the same center point. Is it possible to implement a tiling solution, possibly with CATiledLayer? Does the iPhone natively use the standard 256x256 tiles like Google Maps does? Any direction or help on this would be greatly appreciated. I would rather not waste a couple of weeks trying to implement this if it's not possible.
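
    On the tile format: Google Maps does use the standard 256x256 Web-Mercator tile scheme, and the latitude/longitude-to-tile-index math is well known. A minimal sketch of that conversion (clamping near the poles omitted):

        public final class TileMath {
            // Standard Web-Mercator tile indices (256x256 tiles, Google/OSM numbering).
            public static int[] latLngToTile(double lat, double lng, int zoom) {
                int n = 1 << zoom;  // tiles per axis at this zoom level
                int x = (int) Math.floor((lng + 180.0) / 360.0 * n);
                double latRad = Math.toRadians(lat);
                int y = (int) Math.floor(
                        (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0 * n);
                return new int[] { x, y };
            }
        }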

    Read the article

  • Clickable link in RadGrid column

    - by brainimus
    I have a RadGrid where a column in the grid holds a URL. When I put a value in the column I can see the URL, but the URL is not clickable (to go to the URL). How can I make the URL clickable? Here is a rough example of what I'm doing now:

        DataTable table = new DataTable();
        table.Columns.Add("URL");
        DataRow row = table.NewRow();
        row["URL"] = "http://www.google.com";
        table.Rows.Add(row);
        grid.DataSource = table;

    In addition, I'd really like to show specific text instead of the URL, something similar to <a href="http://www.google.com">Link</a> in HTML. Is there any way to do this?

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t), who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun!

    I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!

    One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)

    My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server.

    So here's a standard template for a logon trigger to address those pesky sa users:

        CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
        WITH ENCRYPTION, EXECUTE AS N'sa'
        AFTER LOGON
        AS
        IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN;
        -- interesting stuff goes here
        GO

    What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:

        WAITFOR DELAY '23:59:59';

    Or:

        EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'

    Or:

        EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
        EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
        EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
        EXEC msdb.dbo.sp_start_job @job_name=N'`';

    Really, I don't want to spoil your own exploration, try it yourself! The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up)

    Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! Here's how RG can do that:

        USE master;
        GO
        CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname
        WITH SCHEMABINDING, ENCRYPTION
        AS
        BEGIN
        RETURN CASE
            WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
            THEN N'SA_LOGIN_PRIORITY'
            ELSE N'default' END
        END
        GO
        CREATE RESOURCE POOL SA_LOGIN_PRIORITY
        WITH
        (
             MIN_CPU_PERCENT = 0
            ,MAX_CPU_PERCENT = 1
            ,CAP_CPU_PERCENT = 1
            ,AFFINITY SCHEDULER = (0)
            ,MIN_MEMORY_PERCENT = 0
            ,MAX_MEMORY_PERCENT = 1
            -- ,MIN_IOPS_PER_VOLUME = 1
            -- ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014
        );
        CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
        WITH
        (
             IMPORTANCE = LOW
            ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
            ,REQUEST_MAX_CPU_TIME_SEC = 1
            ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
            ,MAX_DOP = 1
            ,GROUP_MAX_REQUESTS = 1
        )
        USING SA_LOGIN_PRIORITY;
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
        ALTER RESOURCE GOVERNOR RECONFIGURE;

    From top to bottom:

    - Create a classifier function to determine which pool the session should go to. More info on classifier functions.
    - Create the pool and provide a generous helping of resources for the sa login.
    - Create the workload group and further prioritize those resources for the sa login.
    - Apply the classifier function and reconfigure RG to use it.

    I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and clicking around the UI a lot; there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.

    Some interesting ideas I didn't have time for but expect you to get to before me:

    - Set up different pools/workgroups with different settings and randomize which one the classifier chooses
    - Do the same but base it on time of day (Books Online example covers this)...
    - Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.

    And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:

        ALTER RESOURCE GOVERNOR DISABLE;

    That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit!

    Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications!

    Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup:

        USE msdb;
        GO
        CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
        WITH ENCRYPTION
        AS
        DECLARE @x XML, @message nvarchar(max);
        RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
        IF @x.value('(//LoginName)[1]','sysname')=N'sa'
           AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
        BEGIN
            -- interesting activation procedure stuff goes here
        END
        GO
        CREATE QUEUE SA_LOGIN_PRIORITY_q
            WITH STATUS=ON, RETENTION=OFF,
            ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);
        CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
        CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN
        FOR AUDIT_LOGIN
        TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database'
        GO

    From top to bottom:

    - Create activation procedure for event notification queue.
    - Create queue to accept messages from event notification, and activate the procedure to process those messages when received.
    - Create service to send messages to that queue.
    - Create event notification on AUDIT_LOGIN events that fire the service.

    I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.

    So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:

        SET @message = @x.value('(//HostName)[1]','sysname')
            + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname')
            + N' as SA at ' + @x.value('(//StartTime)[1]','sysname')
            + N' using the ' + @x.value('(//ApplicationName)[1]','sysname')
            + N' program. That''s why you''re getting this message and the attached pornography which'
            + N' is bloating your inbox and violating company policy, among other things. If you know'
            + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL '
            + @x.value('(//SPID)[1]','sysname')
            + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.'
        EXEC msdb.dbo.sp_send_dbmail
            @recipients=N'[email protected]',
            @subject=N'SA Login Alert',
            @query_result_width=32767,
            @body=@message,
            @query=N'EXEC sp_readerrorlog;',
            @attach_query_result_as_file=1,
            @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg'

    I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned; I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.

    Read the article

  • gmaps4rails version 2 build method

    - by BrainLikeADullPencil
    I installed gmaps4rails for my Rails app, ran the generator, and required the two files like this in my application.js file, along with underscore.js:

        //= require underscore
        //= require gmaps4rails/gmaps4rails.base
        //= require gmaps4rails/gmaps4rails.googlemaps

    adding, as instructed on GitHub (https://github.com/apneadiving/Google-Maps-for-Rails), these dependencies in layout.html.erb. When I tried to create the demo map from the GitHub page with this code, I got an error that the object doesn't have a build method:

        Uncaught TypeError: Object # has no method 'build'

    The code from the GitHub page:

        handler = Gmaps.build('Google');
        handler.buildMap({ provider: {}, internal: {id: 'map'}}, function(){
          markers = handler.addMarkers([
            {
              "lat": 0,
              "lng": 0,
              "picture": {
                "url": "https://addons.cdn.mozilla.net/img/uploads/addon_icons/13/13028-64.png",
                "width": 36,
                "height": 36
              },
              "infowindow": "hello!"
            }
          ]);
          handler.bounds.extendWith(markers);
          handler.fitMapToBounds();
        });

    Indeed, when I look inside the base file that I require in the manifest file, there is no build method for that object. How does one create a map in the new version of gmaps4rails?

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx

    Microsoft just released a bunch of new features for Azure on the 22nd, and one of them I was most interested in is DocumentDB, a document NoSQL database service on the cloud.

    Quick Look at DocumentDB

    We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select the region where our DocumentDB will be located. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting.

    Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released.

    Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, and creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

        // Namespaces per the preview SDK.
        using System;
        using System.Dynamic;
        using System.Threading.Tasks;
        using Microsoft.Azure.Documents;
        using Microsoft.Azure.Documents.Client;
        using Newtonsoft.Json;

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }

    Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find a database listed with the name we specified in the code.

    Next we will insert a document into the database and collection we have just created. In the code below we pasted the collection link copied in the previous step and created a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result in the previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert into the database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

    The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }

    Below is the result after the object has been inserted.

    Finally, we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service.

    DocumentDB Benefit

    DocumentDB is a document NoSQL database service. Unlike a traditional database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents will be retrieved at the same time. This means you don't need the joins across tables that a traditional database would require. A document database is very useful when we build high performance systems with hierarchical data structures.

    For example, assume we need to build a blog system: there will be many blog posts, and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post, we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field.

    But if we were using DocumentDB, what we need to do is save the post as a document with a list containing all comments. Under a comment, all sub-comments will be a list inside it. When we display this post, we just need to query the post document, and the content and all comments will be loaded in the proper structure.

        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "title": "xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "postedOn": "08/25/2014 13:55",
          "comments":
          [
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:00",
              "commentedBy": "xxx"
            },
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:10",
              "commentedBy": "xxx",
              "comments":
              [
                {
                  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                  "commentedOn": "08/25/2014 14:18",
                  "commentedBy": "xxx",
                  "comments":
                  [
                    {
                      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                      "commentedOn": "08/25/2014 18:22",
                      "commentedBy": "xxx"
                    }
                  ]
                },
                {
                  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                  "commentedOn": "08/25/2014 15:02",
                  "commentedBy": "xxx"
                }
              ]
            },
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 14:30",
              "commentedBy": "xxx"
            }
          ]
        }

    DocumentDB vs. Table Storage

    DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?". Here are some ideas from me and some MVPs.

    First of all, they are different kinds of NoSQL database: DocumentDB is a document database, while Table Storage is a key-value database.

    Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to 5 during the preview period, and each capacity unit provides 10GB of local SSD storage. The price is $0.73/day, which includes a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB's.

    Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't.

    Fourth, there is a local emulator for Table Storage but none for DocumentDB. We have to connect to the DocumentDB on the cloud when developing locally.

    But DocumentDB supports some cool features that Table Storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only supports indexing on the partition key and row key. It supports transactions; Table Storage does as well, but restricted to the Entity Group Transaction scope. And last, Table Storage is GA, but DocumentDB is still in preview.

    Summary

    In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. Then I explained the concept and benefits of using a document database, and compared it with Table Storage.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Android Image Orientation Issue on Motorola Droid

    - by roundhill
    Hi there, our app uses the gallery pick action to grab an image from the device to upload to a new blog post. We're seeing on the Moto Droid that images taken in portrait are being sent back to the app in landscape orientation, so the image is sideways. AFAIK this only occurs on the Droid. Found this via Google, but we need the full-size image to be uploaded in the correct orientation, so the solution doesn't work for us: http://groups.google.com/group/android-developers/browse_frm/thread/1246475fd4c3fdb6?pli=1 An easy way to reproduce this is to take a picture in portrait on the Droid, then send it to yourself via Gmail. In the email message, the image will be in landscape (sideways). I've tested on the Droid 2.1 update and the issue is still there.
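
    One workaround worth trying, sketched below: read the EXIF orientation tag from the picked file and rotate the full-size bitmap before uploading. android.media.ExifInterface has been available since API level 5, so it should exist on the 2.1 Droid; watch heap usage when rotating full-size bitmaps on these devices.

        import android.graphics.Bitmap;
        import android.graphics.Matrix;
        import android.media.ExifInterface;
        import java.io.IOException;

        public final class OrientationFix {
            // Rotates the bitmap according to the EXIF orientation stored in the file.
            public static Bitmap correctOrientation(Bitmap src, String filePath) throws IOException {
                ExifInterface exif = new ExifInterface(filePath);
                int orientation = exif.getAttributeInt(
                        ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);

                int degrees;
                switch (orientation) {
                    case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
                    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
                    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
                    default: return src;  // already upright, or no usable EXIF data
                }
                Matrix m = new Matrix();
                m.postRotate(degrees);
                return Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), m, true);
            }
        }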

    Read the article

  • SEO: Duplicated URLs with and without trailing slash "/" and ASP.NET MVC

    - by Guillermo Guerini
    Hello guys, after reading the article "To slash or not to slash" (link: http://googlewebmastercentral.blogspot.com/2010/04/to-slash-or-not-to-slash.html) on the Google Webmaster Central Blog (the official one), I decided to test my ASP.NET MVC app. For example: http://domain.com/products and http://domain.com/products/ (with "/" at the end) both return code 200, which means Google understands them as two different links, and the pages are likely to count as "duplicated content". They suggest choosing the form you prefer, with or without the trailing slash, and creating a 301 permanent redirect to it. So if I choose the version without the slash, then when I try to access http://domain.com/products/ it should return a 301 to the link without the slash: http://domain.com/products. The question is: how can I do that with ASP.NET MVC? Thanks, Gui

    Read the article

  • Are there free realtime financial data feeds since the demise of OpenQuant?

    - by Mel Cooper
    Now that the oligopoly of market data providers has successfully killed OpenQuant, does any alternative to proprietary and expensive subscriptions for realtime market data exist? Ideally I would like to be able to monitor, tick by tick, securities from the NYSE, NASDAQ and AMEX (about 6000 symbols). Most vendors put a limit of 500 symbols watchable at the same time; this is unacceptable to me, even if one can imagine a rotation among the 500 symbols, i.e. making windows of 5 sec. of effective observation out of each minute for every symbol. Currently I'm doing this with a Java thread pool calling Google Finance, but this is unsatisfactory for several reasons, one being that Google doesn't return the volume traded. Any hint much appreciated. Cheers

    Read the article

  • Make Custom Project template in Eclipse IDE

    - by Mohit Deshpande
    I have been using the Eclipse IDE for a long time. It's a really great IDE for Java/C/C++ (and other languages, with its THOUSANDS of plugins). Every once in a while I need to create a Javax interface. To do this normally, I would set up a new Java project and then add what I need. But wouldn't it be nice if I could just make a template project that automatically includes the code for the files? How would I go about doing this? Is it even possible? The Eclipse CDT can make a new project type. So can the Google ADT and Google App Engine. So I would imagine it is possible. But how?

    Read the article

  • wxWidgets: How to initialize wxApp without using macros and without entering the main application loop

    - by m_pGladiator
    We need to write unit tests for a wxWidgets application using the Google Test framework. The problem is that wxWidgets uses the macro IMPLEMENT_APP(MyApp) to initialize and enter the application's main loop. This macro creates several functions, including int main(). The Google Test framework also uses macro definitions for each test. One of the problems is that it is not possible to call the wxWidgets macro from within the test macro, because the former creates functions. So, we found that we could replace the macro with the following code:

        wxApp* pApp = new MyApp();
        wxApp::SetInstance(pApp);
        wxEntry(argc, argv);

    That's a good replacement, but the wxEntry() call enters the original application loop. If we don't call wxEntry(), there are still some parts of the application left uninitialized. The question is: how do we initialize everything required for a wxApp to run, without actually running it, so we are able to unit test portions of it?

    Read the article

  • GWT - How to define a Widget outside the layout hierarchy in a UiBinder XML file

    - by mr_room
    Hello, this is my first post. I hope someone can help me. I'm looking for a way to define a widget in a UiBinder XML layout file separately, without it being part of the layout hierarchy. Here's a small example:

        <ui:UiBinder xmlns:ui="urn:ui:com.google.gwt.uibinder"
                     xmlns:g="urn:import:com.google.gwt.user.client.ui">
          <g:Label ui:field="testlabel" text="Hallo" />
          <g:HTMLPanel>
            ...
          </g:HTMLPanel>
        </ui:UiBinder>

    The compilation fails since the ui:UiBinder element expects only one child element. In Java code I would access and bind the Label widget as usual:

        @UiField Label testlabel;

    For example, this could be useful when you define a Grid or FlexTable: I want to define the labels for the table header within the XML layout file, not programmatically within the code. Many thanks in advance.

    Read the article

  • How to implement SAML SSO

    - by A_M
    How is SAML SSO typically implemented? I've read this about using SAML with Google Apps, and the Wikipedia entry on SAML. The Wikipedia entry talks about responding with forms containing details of the SAMLRequest and SAMLResponse. Does this mean that the user has to physically submit the form in order to proceed with the single sign-on? The Google entry talks about using redirects, which seems more seamless to me. However, it also talks about using a form for the response, which the user must submit (although it does talk about using JavaScript to automatically submit the form). Is this the standard way of doing this: using redirects and JavaScript for form submission? Does anyone know of any other good resources on how to implement SSO between a Windows domain and a J2EE web application? The web application is on a separate network/domain. My client wants to use CA SiteMinder (with SAML).

    Read the article

  • Where is a good place for a code review?

    - by Carlos Nunez
    Hi, all! A few colleagues and I created a simple packet-capturing application based on libpcap, GTK+ and SQLite as a project for a Networks Engineering course at our university. While it (mostly) works, I am trying to improve my programming skills and would appreciate it if members of the community could look at what we've put together. Is this a good place to ask for such a review? If not, what are good sites I can throw this question up on? The source code is hosted on Google Code (http://code.google.com/p/nbfm-sniffer) and an executable is available for download (Windows only, though it does compile on Linux and should compile on OS X Leopard as well, provided one has the GTK+ SDK installed). Thanks, everyone! -Carlos Nunez

    Read the article

  • How to use V8's built-in functions

    - by Victor jiang
    I'm new to both JavaScript and V8. In Google's Embedder's Guide, I saw something in the context section talking about built-in utility JavaScript functions, and I also found some .js files (e.g. math.js) in the downloaded source code, so I tried to write a simple program to call functions in these files, but I failed. Does a context created by Persistent<Context> context = Context::New() have any built-in JS functions? How can I access them? Is there a way to first import existing .js files as a library (something like src="xxx" type="text/javascript" in an HTML page) and then run my own script? Can I call the Google Maps API through the embedded V8 library in an app? How?

    Read the article
