Search Results

Search found 18331 results on 734 pages for 'blog engine'.


  • New qeep app for Java ME feature phones: meet qeepy people

    - by hinkmond
    Is it "qeepy" if you meet people by using your cell phone instead of, you know, talking to them? Nah. Not if it's a Java ME cell phone! See: Use Qeep to Meet Peeps Here's a quote: Qeep is a free app, and compatible with over 1,000 Java-enabled feature phones... ... Qeep is one of the world's largest mobile gaming and social discovery platforms. Members of the mobile community can play live multiplayer games; blog photos; send sound attacks, text messages and virtual gifts; and meet new friends worldwide. So, go on. Go, use Qeep on your Java ME feature phone to play multiplayer games, blog photos, and meet new friends worldwide. No one will think that you're weird... Not much, at least. Hinkmond


  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information or resources for creating algorithms like the one illustrated on this blog, which subdivides a polygon (in my case a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html In the comments there is a paper co-authored by the blog's author, but the only formula it lists concerns candidate location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf Any language will do, but JavaScript is preferred if examples can be given (it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?
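
    To give a concrete feel for one simple approach (my own sketch, not the algorithm from the linked paper), here is a minimal JavaScript illustration. It assumes the cell is convex, which holds for Voronoi cells, and recursively cuts it perpendicular to the longer side of its bounding box, randomizing the cut position and the stop threshold so the resulting lots vary in size:

    ```javascript
    // Shoelace formula for polygon area; vertices are [x, y] pairs in order.
    function area(poly) {
      let a = 0;
      for (let i = 0; i < poly.length; i++) {
        const [x1, y1] = poly[i];
        const [x2, y2] = poly[(i + 1) % poly.length];
        a += x1 * y2 - x2 * y1;
      }
      return Math.abs(a) / 2;
    }

    // Split a convex polygon with the line through point p with direction d,
    // returning the two halves.
    function splitConvex(poly, p, d) {
      const side = ([x, y]) => d[0] * (y - p[1]) - d[1] * (x - p[0]);
      const left = [], right = [];
      for (let i = 0; i < poly.length; i++) {
        const a = poly[i], b = poly[(i + 1) % poly.length];
        const sa = side(a), sb = side(b);
        if (sa >= 0) left.push(a);
        if (sa <= 0) right.push(a);
        if ((sa > 0 && sb < 0) || (sa < 0 && sb > 0)) {
          const t = sa / (sa - sb);
          const ip = [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])];
          left.push(ip);
          right.push(ip);
        }
      }
      return [left, right];
    }

    // Recursively cut across the bounding box's longer axis, with a random
    // cut position and a randomized area threshold so lot sizes vary.
    function subdivide(poly, minArea) {
      if (area(poly) < minArea * (0.5 + Math.random())) return [poly];
      const xs = poly.map(p => p[0]), ys = poly.map(p => p[1]);
      const w = Math.max(...xs) - Math.min(...xs);
      const h = Math.max(...ys) - Math.min(...ys);
      const t = 0.4 + 0.2 * Math.random(); // cut somewhere near the middle
      const p = w > h
        ? [Math.min(...xs) + t * w, 0]
        : [0, Math.min(...ys) + t * h];
      const d = w > h ? [0, 1] : [1, 0]; // cut perpendicular to longer axis
      const [a, b] = splitConvex(poly, p, d);
      return [...subdivide(a, minArea), ...subdivide(b, minArea)];
    }

    // Example: split a 10x10 cell into lots of roughly 2 units² each.
    const lots = subdivide([[0, 0], [10, 0], [10, 10], [0, 10]], 2);
    console.log(lots.length, "lots");
    ```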


  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is how you can back up databases in Windows Azure SQL Database. What we have had access to is the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files, one for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.

    The easiest way to get a database that isn't in use is to use CREATE DATABASE ... AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created.

    Previously, I had to automate this process myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.

    Now there's a much simpler way: Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.

    It's important to consider the cost impact of this as well. You are charged for however many databases are on your server on a given day, so if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.

    This is a much-needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
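
    For reference, here is a minimal T-SQL sketch of the manual approach described above (database names are illustrative; run against the master database of your Azure SQL server):

    ```sql
    -- Create a transactionally-consistent copy of the source database.
    CREATE DATABASE SalesDb_Copy AS COPY OF SalesDb;

    -- The copy is created asynchronously; poll until it is ONLINE.
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = N'SalesDb_Copy';

    -- Once ONLINE, export the copy to a BACPAC (portal, PowerShell, or the
    -- Import/Export service), then drop it so you are not charged for the
    -- extra database any longer than necessary.
    DROP DATABASE SalesDb_Copy;
    ```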


  • Presenting Loading Data Warehouse Partitions with SSIS 2012 at SQL Saturday DC!

    - by andyleonard
    Join Darryll Petrancuri and me as we present Loading Data Warehouse Partitions with SSIS 2012 on Saturday, 8 Dec 2012, at SQL Saturday 173 in DC! SQL Server 2012 table partitions offer powerful Big Data solutions to the Data Warehouse ETL Developer. In this presentation, Darryll Petrancuri and Andy Leonard demonstrate one approach to loading partitioned tables and managing the partitions using SSIS 2012, and reporting partition metrics using SSRS 2012. Objectives: A practical solution for loading Big...(read more)


  • WUXGA revisited

    - by John Paul Cook
    I previously blogged about my search for a 17” 1920x1200 laptop. The only one I could find was a 17” MacBook Pro, which has been an excellent machine for running Windows and SQL Server. It is no longer made. Apple has a few refurbished ones available. Just be sure to get a matte display if you buy one. If you want WUXGA resolution or better in a laptop, your only off-the-shelf option is now the 15” MacBook Pro with the Retina display, which is 2880x1800. This exceeds the resolution of my 30” 2560x1600...(read more)


  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX and, even though I have been using it for two years, there is still a lot to learn about it. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many-to-Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it is harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. Further study is still required to compare performance between the SUMMARIZE and cross table filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, and this suggests that, depending on other conditions, you might favor one over the other.
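
    As a quick illustration of the two patterns being compared, here is a hedged DAX sketch over a hypothetical model (a Balances fact table related to Account, and a Bridge table between Account and Customer; the table and measure names are mine, not from the whitepaper):

    ```dax
    -- SUMMARIZE pattern: turn the filtered bridge rows into a set of
    -- account keys, which then filters the fact table.
    BalanceM2M_Summarize :=
    CALCULATE (
        SUM ( Balances[Amount] ),
        SUMMARIZE ( Bridge, Account[AccountKey] )
    )

    -- Cross table filtering pattern: pass the bridge table itself as a
    -- filter argument; simpler syntax, same result.
    BalanceM2M_CrossFilter :=
    CALCULATE (
        SUM ( Balances[Amount] ),
        Bridge
    )
    ```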


  • 80% off for SQL Azure!

    - by Hugo Kornelis
    I have spent the last three days at SQLBits X in London – a truly great experience! There were lots of quality sessions, but I also enjoyed meeting new people and catching up with old friends. One of these friends (and I hope he's still a friend after I post this) is Buck Woody. Not only a great and humorous speaker, but also a very nice fellow – for those who don't mind being teased every now and then. When we were chatting, he told me that he was planning to announce a special access code to allow...(read more)


  • Make a Geeky Lego Key Holder for Your Home [Quick DIY Project]

    - by Asian Angel
    LEGOs are terrific fun to work with whether you are in a playful mood or working on a personal geeky project. With that in mind, the Mini-eco blog has a quick and easy tutorial for making an awesome LEGO key holder for your home or office. The best part about this project is the amount of personalization in colors and/or themes (e.g. Star Wars, Indiana Jones) that you can bring to it. To get started, just visit the blog post linked below… DIY Lego Key Holder [via BoingBoing]


  • AndEngine Sprite position

    - by Kirill Kulakov
    I noticed AndEngine a few days ago and tried to create a basic game with a few sprites. Sure, the engine makes the development process much easier. However, I found the Sprite class lacking a major piece of functionality: whenever you refer to the position of a Sprite, the engine treats the position as the top-left corner of the sprite. This is not ideal, because you have to subtract or add half the width and height to refer to its center. Yet when you scale a Sprite, it is scaled around its center point (which is great). I find it very confusing to refer to the position differently each time. I worked around it by extending the Sprite class and implementing my own setCenter and getCenter methods, but I guess that's not the best way to do it. Do you have any suggestions?
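
    For concreteness, here is a minimal sketch of the subclass workaround described in the question. It assumes the AndEngine GLES2 branch, where Sprite exposes setPosition, getX/getY, and getWidth/getHeight; the class name CenteredSprite is mine. (If you are on the newer GLES2-AnchorCenter branch, note that positions are anchored at the center by default, which may already be what you want.)

    ```java
    import org.andengine.entity.sprite.Sprite;
    import org.andengine.opengl.texture.region.ITextureRegion;
    import org.andengine.opengl.vbo.VertexBufferObjectManager;

    // A Sprite positioned and queried by its center rather than its
    // top-left corner.
    public class CenteredSprite extends Sprite {

        // Construct from a center point instead of the top-left corner.
        public CenteredSprite(final float pCenterX, final float pCenterY,
                              final ITextureRegion pTextureRegion,
                              final VertexBufferObjectManager pVbom) {
            super(pCenterX - pTextureRegion.getWidth() / 2,
                  pCenterY - pTextureRegion.getHeight() / 2,
                  pTextureRegion, pVbom);
        }

        // Move the sprite so its center lands on (pCenterX, pCenterY).
        public void setCenter(final float pCenterX, final float pCenterY) {
            this.setPosition(pCenterX - this.getWidth() / 2,
                             pCenterY - this.getHeight() / 2);
        }

        public float getCenterX() {
            return this.getX() + this.getWidth() / 2;
        }

        public float getCenterY() {
            return this.getY() + this.getHeight() / 2;
        }
    }
    ```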


  • I owe you an explanation

    - by Blueberry Coder
    Welcome to my blog! I am Frédéric Desbiens, a new member of the ADF Product Management team. I joined Oracle only a few weeks ago. My boss is Grant Ronald, and I have the privilege to work in the same team as Susan Duncan, Frank Nimphius, Lynn Munsinger and Chris Muir. I share with them a passion for all things Java and ADF. With this blog, I hope to help you be more successful with our products – whether you are a customer or a partner. You may have heard of me before. Maybe you have my book on your bookshelf, or maybe we met at a conference. I went to JavaOne, ODTUG Kaleidoscope and Oracle OpenWorld in the past, when I worked for a major consulting firm. I will spare you the details of my career; you can have a look at my LinkedIn profile if you are curious about my past. Usually, my posts will be of a technical nature, and will focus on Oracle ADF and Oracle JDeveloper. SOA and portals have always been two topics of interest for me, however, and I will write about them as well. Over time, you will probably get acquainted with my « strategic » side too. I devour history books, and have always had a tendency to look at the big picture. I will probably not resist the temptation of mixing IT and history, but this will be occasional, I promise! At this point, I owe you an explanation about the title of the blog. I am French-Canadian, and wanted to evoke my roots in an obvious yet unobtrusive way. I was born in Chicoutimi, which is one of the main cities of the Saguenay-Lac-Saint-Jean region. Traditionally, a large part of the wild blueberry production of the province of Québec comes from there. A common nickname for the inhabitants is thus Les Bleuets, « The Blueberries » in English. I hope to see you around. You can also follow me on Twitter at @BlueberryCoder.


  • Enjoy Portland SQL Saturday without me

    - by merrillaldrich
    I was incredibly psyched to go to SQL Saturday #27 in Portland, but alas Sunday is my older son Will's birthday, and I can't manage both events in the same weekend. Chalk it up to work-life balance. Anyway, if you are going, have a great time! And maybe I'll see you in Redmond on June 12. ...(read more)


  • SQLU Professional Development Week: The Difference Between Your Business and Community Presence

    - by andyleonard
    Introduction: Proto-earth, dinosaurs, and the days when your personal and business profiles were separate and distinct. What do these things all have in common? They are all in the past. Background Checks: Corporate background checks now routinely include a search for social media profiles, forum posts, and blogs. As professionals are learning, the things you say and do in your "off-work hours" can and will be used against you - even after you're hired and have been doing the job awhile. One point?...(read more)


  • Slowly Changing Dimensions handling in PowerPivot (and BISM?)

    - by Marco Russo (SQLBI)
    During the PowerPivot Workshop in London we received many interesting questions, and Alberto had the inspiration to write this nice post about Slowly Changing Dimensions handling in PowerPivot. The consideration about SCD Type I attributes in an SCD Type II dimension is interesting – you can probably generate them in a more dynamic way in PowerPivot (thanks to VertiPaq and DAX) instead of relying on a relational table containing all the data you need, which usually requires a more complex ETL process....(read more)


  • Which tools can I use to make GTK themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, with Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation for both the engine options (gtkrc files and such) and general theming practices (the format of index.theme files, folder layout, and so on) is scarce, and how-tos and tutorials are often old, subject to lots of opinionated debate, and confusing as a result (to me, coming from a web development background, at least :-). So I wanted to ask the fellow GTK themers and artists out there: which tools do you use to create a new theme, and what does your average workflow look like?
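
    For readers unfamiliar with the engine options the question refers to, here is a small, hedged gtkrc sketch (the style name and the specific values are illustrative, not taken from Humanity):

    ```
    # Minimal GTK2 theme fragment using the Murrine engine.
    style "theme-default" {
      bg[NORMAL] = "#f2f1f0"
      fg[NORMAL] = "#4c4c4c"

      engine "murrine" {
        contrast        = 0.8
        roundness       = 3
        gradient_shades = {1.1, 1.0, 1.0, 0.9}
      }
    }

    # Apply the style to all widgets.
    class "GtkWidget" style "theme-default"
    ```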


  • Free Webinar - Using Enterprise Data Integration Dashboards

    - by andyleonard
    Join Kent Bradshaw and me as we present Using Enterprise Data Integration Dashboards Tuesday 11 Dec 2012 at 10:00 AM ET! If data is the life of the modern organization, data integration is the heart of an enterprise. Data circulation is vital. Data integration dashboards provide enterprise ETL (Extract, Transform, and Load) teams near-real-time status supported with historical performance analysis. Join Linchpins Kent Bradshaw and Andy Leonard as they demonstrate and discuss the benefits of data...(read more)


  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck, along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore? If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations. At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore: report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism, a best-practices scenario for columnstore.

    The Win: Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), query performance with the SQL Server 2012 nonclustered columnstore increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details: The table below provides the raw data & summarizes the performance deltas.

                                     Logical Reads (8K pages)   CPU (ms)    Duration (ms)
    Columnstore                      160,323                    20,360      9,786
    Conventional Table & Indexes     9,053,423                  549,608     193,903
    Δ                                x56                        x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data; note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.
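
    For context, creating such an index requires no application changes. A hedged, illustrative T-SQL example (table and column names are hypothetical, not the MSIT schema):

    ```sql
    -- A SQL Server 2012-style nonclustered columnstore index covering the
    -- columns the aggregation queries touch.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_PerfSample
    ON dbo.PerfSample (SampleDate, ServerName, CounterName, CounterValue);

    -- A typical SONAR-style aggregation that benefits from the index:
    -- a scan plus grouping over a large number of rows.
    SELECT ServerName, CounterName, AVG(CounterValue) AS AvgValue
    FROM dbo.PerfSample
    WHERE SampleDate >= '20120101'
    GROUP BY ServerName, CounterName;
    ```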


  • 24 Hours of PASS (September 2014): Summit Preview Edition

    - by Sergio Govoni
    Which sessions can you expect to find at the next PASS Summit 2014? Find out on September 9, 2014 (12:00 GMT) at the free online event 24 Hours of PASS: Summit Preview Edition. Register now at this link. No matter where in the world you follow the event from, the important thing to know is that there will be 24 hours of continuous training on SQL Server and Business Intelligence, right on your computer!


  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data for quickly monitoring SSIS package execution in SQL Server 2012 and later:

    get-ssis-execution-status
    get-ssis-data-pumped-rows

    I've started to use Gist, since it comes in very handy for these "quick'n'dirty" scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine).

    Now, back to the aforementioned scripts. The first one (get-ssis-execution-status) returns a list of all executed and executing packages, along with:

    - the latest successful and running executions (so that one can get an idea of the expected run time)
    - error messages
    - warning messages related to duplicate rows found in lookups

    The second one (get-ssis-data-pumped-rows) returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let that be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the Data Flow component. This helps you quickly understand how many rows have been sent and where, without having to increase the logging level.

    Enjoy!

    PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course, any feedback on this is welcome.
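
    To give a flavor of what such monitoring queries look like, here is a minimal sketch in the spirit of get-ssis-execution-status (the real scripts are on the author's Gist; this simplified query of mine uses only the standard SSISDB catalog views in SQL Server 2012 and later):

    ```sql
    -- Most recent SSIS executions, newest first.
    SELECT TOP (20)
           e.execution_id,
           e.folder_name,
           e.project_name,
           e.package_name,
           e.status,        -- 1 created, 2 running, 4 failed, 7 succeeded
           e.start_time,
           e.end_time
    FROM SSISDB.catalog.executions AS e
    ORDER BY e.execution_id DESC;
    ```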


  • Geek City: What gets logged for SELECT INTO operations?

    - by Kalen Delaney
    Last week, I wrote about logging for index rebuild operations. I wanted to publish the results of that testing as soon as I could, because they dealt with a specific question I was trying to answer. However, I actually started out my testing by looking at the logging done for a different operation, and ended up generating some new questions for myself. Before I started testing the index rebuilds, I thought I would just get warmed up by observing the logging for SELECT INTO. I thought I...(read more)


  • When a problem is resolved

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Jen McCown, and she’s picked the topic of Resolutions. It’s a new year, and she’s thinking about what people have resolved to do this year. Unfortunately, I’ve never really done resolutions like that. I see too many people resolve to quit smoking, or lose weight, or whatever, and fail miserably. I’m not saying I don’t set goals, but it’s not a thing for New Year. The obvious joke is “1920x1080” as a resolution, but I’m not going there. I think Resolving is a strange word. It makes it sound like I’m having to solve a problem a second time, when actually, it’s more along the lines of solving a problem well enough for it to count as finished. If something has been resolved, a solution has been provided. There is a resolution, through the provision of a solution. It’s a strangeness of English. When I look up the word resolution at dictionary.com, it has 12 options, including “settling of a problem”. There’s a finality about resolution. If you resolve to do something, you’re saying “Yes. This is a done thing. I’m resolving to do it, which means that it may as well be complete already.” I like to think I resolve problems, rather than just solving them. I want my solving to be final and complete. If I tune a query, I don’t want to find that I’m back in there, re-tuning it at some point. Strangely, if I re-solve a problem, that implies that I didn’t resolve it in the first place. I only solved it. Temporarily. We “data-folk” live in a world where the most common answer is “It depends.” Frustratingly, the thing an answer depends on may still be changing in the system in question. That probably means that any solution that is put in place may need reinvestigating at some point later. So do I resolve things? Yes. Am I Chuck Norris, and solve things so well the world would break first? No. Do these two claims happily sit beside each other? No, unfortunately not. But I happily take responsibility for things, and let my clients depend on me to see it through. As far as they are concerned, it is resolved. And so I resolve to keep resolving, right through 2011.


  • Parallel Data Warehouse

    - by jchang
    The Microsoft Parallel Data Warehouse diagram was somewhat difficult to understand in terms of the functionality of each subsystem in relation to the configuration of its components. So now that HP has provided a detailed list of the PDW components, the diagram below shows the PDW subsystems with component configuration (InfiniBand, FC, and network connections not shown). Observe that there are three different ProLiant server models, the DL360 G7, DL370 G6 and the DL380 G7, in five different configurations...(read more)


  • T-SQL Tuesday #34: Help me, Obi-Wan Kenobi, You're My Only Hope!

    - by AllenMWhite
    This T-SQL Tuesday is about a person who helped you understand SQL Server. It's not a stretch to say that it's people who help you get to where you are in life, and Rob Volk (@sql_r) is sponsoring this month's T-SQL Tuesday, asking who that person is that helped you get there. Over the years, there've been a number of people who've helped me, but one person stands out above the rest: someone who was patient, kind, and always explained the details in a way that just made sense. I first met Don Vilen at...(read more)


  • Meet @marcorus and @ferrarialberto at TechEd Europe 2012 #tee2012

    - by Marco Russo (SQLBI)
    Alberto and I are in Amsterdam this week at TechEd Europe 2012. If you are here at the conference, you can meet us at the following times and places:

    Wed, Jun 27, 10:15 AM - 11:30 AM – Room G106 – DBI319 - BISM: Multidimensional vs. Tabular
    Wed, Jun 27, 02:15 PM - 02:30 PM – Microsoft Press Booth in the TechExpo area – PowerPivot for Excel 2010 Book Signing
    Thu, Jun 28, 8:30 AM - 9:45 AM – Room E107 – Many-to-Many Relationships in BISM Tabular
    Fri, Jun 29, 1:00 PM - 2:45 PM – Breakthrough Insight at the Microsoft SQL Server Booth, TechExpo area – Staff and Q&A

    We'll try to visit the Microsoft booth very often, and we'll be in the Breakthrough Insight area of the SQL Server zone. And don't miss the PowerPivot for Excel 2010 book signing event!


  • Will having a website duplicated on multiple top level domains be penalised by search engines [duplicate]

    - by user1020317
    I'm running a website for a global company, and although we rank first in search engine results here in Ireland, a search done from other countries doesn't rank us as highly. If I register the domain under other top-level domains (e.g. example.co.uk, example.nor, etc.) and then just mirror the .com site to those other domains, will I be penalised by search engines for having duplicate content? Has anyone else faced a similar problem and found a way to capture global search traffic? Thanks.

