Search Results

Search found 6186 results on 248 pages for 'arrow keys'.


  • SharePoint 2010 Hosting - ASPHostPortal :: Installing SSRS 2008 R2 on SharePoint 2010

    - by mbridge
    What do you need first? Download the SQL Server 2008 R2 November CTP Reporting Services Add-in for Microsoft SharePoint Technologies 2010, then follow these steps: 1. Install a SharePoint technology instance. (Already did this when installing PowerPivot with SharePoint.) 2. Install SQL Server 2008 R2 November CTP Reporting Services and specify that the report server uses SharePoint Integrated mode. 3. Configure Reporting Services. 4. Download the Reporting Services Add-in by clicking the rsSharePoint.msi link later on this page; to start the installation immediately, click Run. After installing Reporting Services and the add-in, your report server is ready to be integrated with SharePoint. In SharePoint 2010 we have some new admin screens. To integrate, go to Central Administration, General Application Settings. When you have successfully installed the add-in, a Reporting Services icon will be there. Click Reporting Services Integration. Add the report server web service URL (to get the URL, open the Reporting Services Configuration tool, connect to the report server, and click Web Service URL; click the URL to verify it works, then copy it and paste it into Report Server Web Service URL), and select your authentication mode (Windows authentication is preferred). Add the username and password of your admin account. Click OK to configure and start the integration. After the installation you can set the Reporting Services defaults. What has changed in SharePoint 2010 is that there is no dedicated report library available; you have to add content types to a default library. So go to a site collection, Site Actions, View All Site Content, and create an Asset library. Now we have to make sure we can add reports to the library. To do this we have to add content types: open the library, click Library Tools, Library Settings, and under Content Types click "Add from existing site content types". In the Select Content Types section, in "Select site content types from", click the arrow to select Reporting Services. In the Available Site Content Types list, click Report Builder, Report Data Source and Report, then click Add to move the selected content types to the "Content types to add" list. Now we are ready to upload reports and execute them from within our web parts. Another interesting post: Integrating SharePoint 2010 and SQL 2008 R2
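    The same content-type wiring can also be scripted instead of clicked through. Below is a minimal sketch using the SharePoint 2010 server object model; the site URL and library name are hypothetical, and the content type display names are taken from the post as-is, so verify them against the Reporting Services group on your own farm:

        using System;
        using Microsoft.SharePoint;

        class AddReportingContentTypes
        {
            static void Main()
            {
                // Hypothetical site URL and library name - adjust to your environment.
                using (SPSite site = new SPSite("http://server/sites/bi"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList library = web.Lists["Reports"];
                    library.ContentTypesEnabled = true;

                    // Content type names as listed in the post; the exact display
                    // names in the Reporting Services group may differ.
                    string[] names = { "Report Builder", "Report Data Source", "Report" };
                    foreach (string name in names)
                    {
                        // Pull the content type from the site's gallery and attach it to the library.
                        SPContentType ct = web.AvailableContentTypes[name];
                        if (ct != null && library.ContentTypes[name] == null)
                            library.ContentTypes.Add(ct);
                    }
                    library.Update();
                }
            }
        }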

    Read the article

  • Collide with rotation of the object

    - by Lahiru
    I'm developing a mirror for a laser beam (the ball sprite). I'm trying to redirect the laser beam according to the rotation angle of the mirror (the rectangle). How can I make the ball bounce off at the correct angle when the object it collides with is rotated (e.g. 45 degrees), instead of just bouncing straight back? Here is a screenshot of my work, and here is my code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.GamerServices;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;
    using Microsoft.Xna.Framework.Media;

    namespace collision
    {
        /// <summary>
        /// This is the main type for your game
        /// </summary>
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;

            Texture2D ballTexture;
            Rectangle ballBounds;
            Vector2 ballPosition;
            Vector2 ballVelocity;
            float ballSpeed = 30f;

            Texture2D blockTexture;
            Rectangle blockBounds;
            Vector2 blockPosition;
            private Vector2 origin;

            KeyboardState keyboardState;

            // Font
            SpriteFont Font1;
            Vector2 FontPos;
            private String displayText;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            /// <summary>
            /// Allows the game to perform any initialization it needs to before starting to run.
            /// </summary>
            protected override void Initialize()
            {
                ballPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height * 0.25f);
                blockPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height / 2);
                ballVelocity = new Vector2(0, 1);

                base.Initialize();
            }

            /// <summary>
            /// LoadContent will be called once per game and is the place to load all of your content.
            /// </summary>
            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                ballTexture = Content.Load<Texture2D>("ball");
                blockTexture = Content.Load<Texture2D>("mirror");

                // Create rectangles based off the size of the textures
                ballBounds = new Rectangle((int)(ballPosition.X - ballTexture.Width / 2), (int)(ballPosition.Y - ballTexture.Height / 2), ballTexture.Width, ballTexture.Height);
                blockBounds = new Rectangle((int)(blockPosition.X - blockTexture.Width / 2), (int)(blockPosition.Y - blockTexture.Height / 2), blockTexture.Width, blockTexture.Height);

                origin.X = blockTexture.Width / 2;
                origin.Y = blockTexture.Height / 2;

                Font1 = Content.Load<SpriteFont>("SpriteFont1");
                FontPos = new Vector2(graphics.GraphicsDevice.Viewport.Width - 100, 20);
            }

            /// <summary>
            /// UnloadContent will be called once per game and is the place to unload all content.
            /// </summary>
            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            private float RotationAngle;
            float circle = MathHelper.Pi * 2;
            float angle;

            /// <summary>
            /// Allows the game to run logic such as updating the world, checking for collisions,
            /// gathering input, and playing audio.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();

                // Check for collision between the ball and the block, or if the ball is outside the bounds of the screen
                if (ballBounds.Intersects(blockBounds) || !GraphicsDevice.Viewport.Bounds.Contains(ballBounds))
                {
                    // We have a simple collision! If it has hit, swap the direction of the ball and update its position
                    ballVelocity = -ballVelocity;
                    ballPosition += ballVelocity * ballSpeed;
                }
                else
                {
                    // Move the ball a bit
                    ballPosition += ballVelocity * ballSpeed;
                }

                // Update bounding boxes
                ballBounds.X = (int)ballPosition.X;
                ballBounds.Y = (int)ballPosition.Y;
                blockBounds.X = (int)blockPosition.X;
                blockBounds.Y = (int)blockPosition.Y;

                keyboardState = Keyboard.GetState();
                float val = 1.568017f / 90;

                if (keyboardState.IsKeyDown(Keys.Space))
                    RotationAngle = RotationAngle + (float)Math.PI;
                if (keyboardState.IsKeyDown(Keys.Left))
                    RotationAngle = RotationAngle - val;

                angle = (float)Math.PI / 4.0f; // 45 degrees
                RotationAngle = angle;
                // RotationAngle = RotationAngle % circle;
                displayText = RotationAngle.ToString();

                base.Update(gameTime);
            }

            /// <summary>
            /// This is called when the game should draw itself.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                spriteBatch.Begin();

                // Find the center of the string
                Vector2 FontOrigin = Font1.MeasureString(displayText) / 2;
                spriteBatch.DrawString(Font1, displayText, FontPos, Color.White, 0, FontOrigin, 1.0f, SpriteEffects.None, 0.5f);
                spriteBatch.Draw(ballTexture, ballPosition, Color.White);
                spriteBatch.Draw(blockTexture, blockPosition, null, Color.White, RotationAngle, origin, 1.0f, SpriteEffects.None, 0f);

                spriteBatch.End();

                base.Draw(gameTime);
            }
        }
    }
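    The usual way to get a correct bounce off a rotated mirror is to reflect the beam's velocity about the mirror's surface normal instead of negating it. XNA already provides Vector2.Reflect for this. A minimal sketch, assuming RotationAngle is the mirror's rotation in radians as in the code above, and that an unrotated mirror lies horizontally with its normal pointing straight up (screen coordinates, so "up" is -Y):

        // Inside Update(), in place of the simple "ballVelocity = -ballVelocity" bounce:
        if (ballBounds.Intersects(blockBounds))
        {
            // Rotate the unrotated mirror's up-normal (0, -1) by the mirror's angle.
            Vector2 normal = new Vector2((float)Math.Sin(RotationAngle), -(float)Math.Cos(RotationAngle));

            // Vector2.Reflect computes r = d - 2 * (d . n) * n, i.e. angle of
            // incidence equals angle of reflection about the surface normal.
            ballVelocity = Vector2.Reflect(ballVelocity, normal);
            ballVelocity.Normalize();

            ballPosition += ballVelocity * ballSpeed;
        }

    For a 45-degree mirror this sends a beam travelling straight down out to the side, as expected. Note that ballBounds and blockBounds remain axis-aligned even though the mirror sprite is drawn rotated, so for precise hits you would also need a rotation-aware intersection test (e.g. separating axis) rather than Rectangle.Intersects.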

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic setup of Azure storage for local development and production. This is, in a way, a completion of the guide at http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that adds a practical example I believe is commonly needed: uploading and presenting an image from a user. First we set up local storage, then we configure it to work in a web role. Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and Razor views.

    1. Set up the connection string. 1.1 Right-click your web role and choose "Properties". 1.2 Click Settings. 1.3 Add a setting. 1.4 Name your setting; this will be the name of the connection string. 1.5 Click the ellipsis to the right (the ellipsis appears when you select the value area). 1.6 In the window that appears, select "Windows Azure storage emulator" and click OK. Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure storage tools: 2.1 Click Tools –> Library Package Manager –> Manage NuGet Packages for Solution. 2.2 This is what it looks like after the package has been added.

    Now on to what the code should look like. 3.1 First we need a view which collects images to upload, here Index.cshtml:

    @model List<string>

    @{
        ViewBag.Title = "Index";
    }

    <h2>Index</h2>
    <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file1" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file2" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file3" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file4" />
        <br />
        <input type="submit" value="Submit" />
    </form>

    @foreach (var item in Model) {
        <img src="@item" alt="Alternate text"/>
    }

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler; I use this as a folder for each user's pictures. If this is not a requirement, you could just call it "container", or anything in lowercase characters, directly when creating the container.

    public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
    {
        BlobHandler bh = new BlobHandler("containername");
        bh.Upload(file);
        var blobUris = bh.GetBlobs();

        return RedirectToAction("Index", blobUris);
    }

    3.3 The handler model. I'll let the comments speak for themselves.

    public class BlobHandler
    {
        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));

        private string imageDirecoryUrl;

        /// <summary>
        /// Receives the user's Id for where the pictures are and creates
        /// a blob container with that name if it does not exist.
        /// </summary>
        /// <param name="imageDirecoryUrl"></param>
        public BlobHandler(string imageDirecoryUrl)
        {
            this.imageDirecoryUrl = imageDirecoryUrl;

            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            // Create the container if it doesn't already exist.
            container.CreateIfNotExists();

            // Make it available to everyone.
            container.SetPermissions(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });
        }

        public void Upload(IEnumerable<HttpPostedFileBase> file)
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            if (file != null)
            {
                foreach (var f in file)
                {
                    if (f != null)
                    {
                        CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                        blockBlob.UploadFromStream(f.InputStream);
                    }
                }
            }
        }

        public List<string> GetBlobs()
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve reference to a previously created container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            List<string> blobs = new List<string>();

            // Loop over blobs within the container and output the URI to each of them.
            foreach (var blobItem in container.ListBlobs())
                blobs.Add(blobItem.Uri.ToString());

            return blobs;
        }
    }

    3.4 When the files have been uploaded, we present them to our user on the index page. Pretty straightforward: in this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. You can see the storage emulator UI here. 4.1 If you get any exceptions or errors, I suggest you first check whether the emulator service is running correctly. I had problems with this that seemed related to the installation; a reboot fixed them.

    5. Set up cloud storage. To do this we need to add configuration for the cloud just as we did for local storage in step one. 5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon on the right and click "Manage keys". 5.2 Do as in step 1, but replace step 1.6 with: choose "Manually entered credentials" and enter your account name, then paste your account key from step 5.1 and click OK. 5.3 Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
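    One small, easy-to-miss detail: UploadFromStream as used above stores the blob with a generic content type, so browsers may not treat the returned URI as an image. A hedged variant of the upload loop (same Windows Azure storage client library assumed; the method name UploadWithContentType is hypothetical) that persists the MIME type the browser sent:

        public void UploadWithContentType(IEnumerable<HttpPostedFileBase> files)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            if (files == null) return;

            foreach (var f in files)
            {
                if (f == null) continue;

                CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);

                // Record the MIME type from the request (e.g. "image/jpeg") so the
                // blob is served back as an image rather than application/octet-stream.
                blockBlob.Properties.ContentType = f.ContentType;
                blockBlob.UploadFromStream(f.InputStream);
            }
        }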

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SQLChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

    Why refactor a database: To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior by refactoring. If you haven't written any, go back and read my post on the matter, then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them; resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine, it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database: Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects: Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer, but each of them carries a potential input/output behavior change. Dropping a column can result in views no longer working or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. Foreign keys break a TRUNCATE TABLE command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered by black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures: Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database, but performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code: We all have some bad code in our systems. We usually refer to such code as code smells, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs, cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT *, for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view-refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it; it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.

    Breaking apart huge stored procedures: Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and saps the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's job. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior, and only then starting to refactor. First we split the logic inside into smaller parts, like new stored procedures and UDFs; those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use, and an example of testing and refactoring.
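    To make the "black-box tests first" argument concrete, here is a minimal sketch of the kind of test the post argues for: pin down a stored procedure's input/output contract through its public interface before touching its internals. The procedure name, connection string and fixture values are hypothetical, not from the original post:

        using System.Data;
        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class GetOrdersByCustomerTests
        {
            const string ConnectionString = "Server=.;Database=Shop;Integrated Security=true";

            [Test]
            public void ReturnsExpectedColumnsAndRowCountForKnownCustomer()
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("dbo.GetOrdersByCustomer", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@CustomerId", 42); // known test fixture row

                    var table = new DataTable();
                    conn.Open();
                    table.Load(cmd.ExecuteReader());

                    // The contract we refactor against: same columns, same data.
                    Assert.AreEqual("OrderId", table.Columns[0].ColumnName);
                    Assert.AreEqual("OrderDate", table.Columns[1].ColumnName);
                    Assert.AreEqual(3, table.Rows.Count);
                }
            }
        }

    As long as a suite of such tests stays green, the procedure's internals can be split, rewritten or tuned with confidence that its observable behavior has not changed.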

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
    Over the past couple of years I've been keeping track of common SQL Server script commands that I use, so I don't have to constantly Google them. Most of them check whether a SQL object exists before dropping it. I thought others might find it useful to have them all in one place, so here you go:

    --===============================
    -- Create a new table and add keys and constraints
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        CREATE TABLE [dbo].[TableName]
        (
            [ColumnName1] INT NOT NULL,  -- To have a field auto-increment add IDENTITY(1,1)
            [ColumnName2] INT NULL,
            [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('')
        )

        -- Add the table's primary key
        ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED
        (
            [ColumnName1],
            [ColumnName2]
        )

        -- Add a foreign key constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY
        (
            [ColumnName1],
            [ColumnName2]
        )
        REFERENCES [dbo].[Table2Name]
        (
            [OtherColumnName1],
            [OtherColumnName2]
        )

        -- Add indexes on columns that are often used for retrieval
        CREATE INDEX IN_ColumnNames ON [dbo].[TableName]
        (
            [ColumnName2],
            [ColumnName3]
        )

        -- Add a check constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000))
    END

    --===============================
    -- Add a new column to an existing table
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0)

        -- Add a description extended property to the column to specify what its purpose is.
        EXEC sys.sp_addextendedproperty @name = N'MS_Description',
            @value = N'Add column comments here, describing what this column is for.',
            @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
            @level1name = N'TableName', @level2type = N'COLUMN',
            @level2name = N'ColumnName'
    END

    --===============================
    -- Drop a table
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        DROP TABLE [dbo].[TableName]
    END

    --===============================
    -- Drop a view
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA = 'dbo')
    BEGIN
        DROP VIEW [dbo].[ViewName]
    END

    --===============================
    -- Drop a column
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        -- If the column has an extended property, drop it first.
        IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'Table',
            N'TableName', N'COLUMN', N'ColumnName'))
        BEGIN
            EXEC sys.sp_dropextendedproperty @name = N'MS_Description',
                @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
                @level1name = N'TableName', @level2type = N'COLUMN',
                @level2name = N'ColumnName'
        END

        ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
    END

    --===============================
    -- Drop Primary key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name]
    END

    --===============================
    -- Drop Foreign key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'FOREIGN KEY' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name]
    END

    --===============================
    -- Drop Unique key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UNI_Name]
    END

    --===============================
    -- Drop Check constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'CHECK' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name]
    END

    --===============================
    -- Drop a column's Default value constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj = c.id
        WHERE s.xtype = 'd' AND c.cdefault = s.id
        AND s.parent_obj = OBJECT_ID('TableName') AND c.name = 'ColumnName')

    IF @ConstraintName IS NOT NULL
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Example of how to drop a dynamically named Unique constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
        WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%')

    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName)
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Check for and drop a temp table
    --===============================
    IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName

    --===============================
    -- Drop a stored procedure
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'PROCEDURE' AND ROUTINE_SCHEMA = 'dbo'
        AND ROUTINE_NAME = 'StoredProcedureName')
    BEGIN
        DROP PROCEDURE [dbo].[StoredProcedureName]
    END

    --===============================
    -- Drop a UDF
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'FUNCTION' AND ROUTINE_SCHEMA = 'dbo'
        AND ROUTINE_NAME = 'UDFName')
    BEGIN
        DROP FUNCTION [dbo].[UDFName]
    END

    --===============================
    -- Drop an Index
    --===============================
    IF EXISTS (SELECT * FROM sys.indexes WHERE name = 'IndexName')
    BEGIN
        DROP INDEX TableName.IndexName
    END

    --===============================
    -- Drop a Schema
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName')
    BEGIN
        EXEC('DROP SCHEMA SchemaName')
    END
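    The same INFORMATION_SCHEMA checks translate directly to application code when you need to test for an object before running DDL from a program. A small hypothetical helper using ADO.NET (the class and parameter names are mine, not from the original post; it assumes an already-open connection):

        using System;
        using System.Data.SqlClient;

        static class SqlObjectUtils
        {
            // Returns true if the given table exists, using the same
            // INFORMATION_SCHEMA check as the script above.
            public static bool TableExists(SqlConnection conn, string schema, string table)
            {
                const string sql =
                    @"SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES
                      WHERE TABLE_SCHEMA = @schema AND TABLE_NAME = @table";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@schema", schema);
                    cmd.Parameters.AddWithValue("@table", table);
                    return (int)cmd.ExecuteScalar() > 0;
                }
            }
        }

    Parameterizing the schema and table names this way also avoids building the existence check by string concatenation, which is where SQL injection tends to creep into utility code.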

    Read the article

  • Create Shortcuts for Your Favorite or Most Used Folders in Ubuntu

    - by Asian Angel
    Do you have certain folders that you access often each day but are only available through the Places Menu or Nautilus? See how easy it is to create shortcuts for your desktop and taskbar with our quick tutorial. To get started, open Nautilus and locate the folders that you want to make new shortcuts for. For our example we chose Ubuntu One. Right-click on the chosen folder and select Make Link. Your new shortcut will appear with the text Link to "Folder Name" and an arrow shortcut marker attached. If you are happy with your new shortcut as is, then drag it to your desktop or taskbar as desired. We created the shortcut twice in our example: once for the desktop and once for the taskbar. For our example we decided to customize the taskbar shortcut a bit. To customize your shortcut, right-click on it and select Properties. Note: The desktop shortcut is limited in how much you can customize it (a name change and the addition of up to four emblems to the folder). From here you can rename the shortcut and change the icon as desired. A quick name change and a new icon made a huge improvement in how our taskbar shortcut looked. Note: The link for the icon we used is shown below. A little touch-up to our desktop shortcut and both are looking good. Download the Ubuntu Cloud Icon *Icon is 128*128 pixels and comes in .png format.

    Read the article

  • Invitation to Oracle Open Experience DK11

    - by user13847369
    Dear Partners, Together with Arrow we are hosting a larger customer event on 7 December under the name "Oracle Open Experience DK11". You can see the agenda for the day here:

    By Torben Markussen, Sales Director/Middleware Director Nordics, Oracle: Catch the atmosphere from this year's OpenWorld and get a presentation of the biggest and most relevant news. Hear how you can take concrete advantage of Oracle news such as Oracle Cloud, the Pillar Axiom storage solution, and the unique new possibilities of Fusion Applications.

    By Hans Bøge, IT Architect, Oracle: Hans Bøge explains how to optimize your licensing and ease the administration of your database. A server is born with numerous cores, all of them carrying license costs. Why not enable only the number that matches your needs? Hear how to get a solution that can be upgraded along the way as needs arise.

    By Kim Estrup, Product Manager 11g, Oracle: Custom implementations make your systems unique, which only increases complexity and administration. Kim Estrup will tell you all about how to ease the management of your traditional data centers and establish lightning-fast access to the cloud. Get an introduction to a comprehensive solution that cuts through complexity, raises service quality and minimizes administration costs.

    By Steen Schmidt, IT Architect, Oracle: Hear how the latest technologies within virtualization and server management can simplify your IT processes and reduce costs significantly. See a demonstration of a solution that is four times more scalable than the latest VMware and can support up to 128 virtual CPUs per virtual machine, at a fraction of the cost. Steen Schmidt tells you all about the possibilities in your storage system.

    By Erik Lund, Storage Presales Architect, Oracle: Does it sound too good to be true? It can actually be done. Erik Lund tells you all about the latest storage technologies, which open up scalability at both disk and controller level, and database compression by up to a factor of 50. That reduces the need for disks and shrinks your backup window considerably. Combined with the market's highest utilization rate and cost-free QoS, it opens entirely new possibilities.

    The day starts at 8:30 with breakfast and ends at 14:00. I hope you would like to attend, and to invite your customers. You should have received an invitation that you can also use to send out to your customers; if that is not the case, do send me an email. You can also register via this link: http://www.woho.dk/oracle and enter ticket number 1500. Best regards, Thomas Stenvald

    Read the article

  • Opening cursor files in a graphics editor?

    - by sdaau
    I'm looking at /usr/share/icons/DMZ-White/cursors, and there is:

    $ tree -s /usr/share/icons/DMZ-White/
    /usr/share/icons/DMZ-White/
    ├── [ 4096] cursors
    │   ├── [   14] 00008160000006810000408080010102 -> v_double_arrow
    ...
    │   ├── [    5] 9d800788f1b08800ae810202380a0822 -> hand2
    │   ├── [    8] arrow -> left_ptr
    │   ├── [15776] bd_double_arrow
    │   ├── [15776] bottom_left_corner
    │   ├── [15776] bottom_right_corner
    │   ├── [15776] bottom_side
    ...

    ... a bunch of files without extension that GIMP cannot open. Is there an editor where these files can be opened, or at least a converter to something like .png? I can note that ImageMagick's display also failed to open these files...

    I also found Gursor Maker, a cursor editor for X11/GTK+, and got the CVS code from SourceForge. It still uses Numeric (the predecessor of numpy), so to run it you'll have to change "from Numeric import *" to "from numpy import *" in xcurio.py, curxp.py, gimp.py and colorfunc.py, and comment out the "from xml.dom.ext.reader import Sax2" in lsproj.py. With that, I got it running on 11.04... but cannot get any files to open? So I thought I should grep for paths; nothing much came up, and when I looked into cursordefs.py, I simply had to paste this:

    CURSOR_ICON = gtk.gdk.pixbuf_new_from_xpm_data([
        "10 16 3 1",
        "  c None",
        ". c #000000",
        "+ c #FFFFFF",
        "..        ",
        ".+.       ",
        ".++.      ",
        ".+++.     ",
        ".++++.    ",
        ".+++++.   ",
        ".++++++.  ",
        ".+++++++. ",
        ".++++++++.",
        ".+++++....",
        ".++.++.   ",
        ".+. .++.  ",
        "..  .++.  ",
        "    .++.  ",
        "     .++. ",
        "      ..  "])

    Heh :) In any case, it doesn't look like it will be very usable on newer Ubuntus, unfortunately... I also tested the XMC plugin: on 11.04 it has to be built from source (from the link in the accepted answer); the requirements on my system resolved to:

    sudo apt-get install libgimp2.0-dev libglib2.0-0-dbg libglib2.0-0-refdbg libglib2.0-cil-dev libgtk2.0-0-dbg libgtk2.0-cil-dev

    ... and after that, the configure/make procedure in the INSTALL file works. Note that this plugin is a bit "sneaky": you should choose "All files" in the open dialog (as the cursor files have no extensions), and cursor previews will not be rendered at first. Open one cursor file; after it has been opened, a preview appears in the File/Open dialog, and other than that, it works fine...

    Read the article

  • Cannot boot into system after deleting partition

    - by Clayton
    Okay... so this was kind of a stupid thing for me to do, now that I think back on it. I was experiencing a ton of lag and had less usable memory after installing Ubuntu 12.04. After remembering I had installed multiple server versions of Ubuntu 12.04 by mistake, I went into Disk Management and proceeded to delete each and every one. Everything went fine; up until this week I had not experienced any problems. But starting yesterday I began to get lag just as before, and nothing fixed the problem. I decided to remove the Ubuntu partition, since I was also experiencing a visual error when given the option to select an OS to boot (the selection screen doesn't come up at all, and I received a monitor resolution error instead, but I could still choose both Windows and Ubuntu via the arrow keys). After deleting the Ubuntu partition, so that I could see if running just Windows would fix the problem, I proceeded with what I was doing, installing a few programs that were not tied to my predicament in any way. Upon rebooting my desktop, however, I received the following error:

    error: unknown filesystem.
    grub rescue>

    Hoping I could boot into Ubuntu via a pen drive and possibly back up my important files and wipe the hard drive to start fresh, I installed Ubuntu 13.04 on it, but even that does not boot. Instead, I get this message on a terminal screen:

    SYSLINUX 4.06 EDD 4.06-pre1 Copyright (C) 1994-2011 H. Peter Anvin et al
    ERROR: No configuration file found
    No DEFAULT or UI configuration directive found!
    boot:

    So, more or less, my desktop is screwed. I need to be able to get to the files inside because of my job as an artist, as well as retrieve the documents for my stories stored on Windows. Once I succeed in solving this once and for all, I know for a fact I will stick to Ubuntu only, and install whatever is required to run the Windows applications I used to use or need to use. I would rather not reformat the hard drive; if I need to, it is a last resort. And I doubt I can use a Windows Recovery Disk to get my files back, as my mom has thrown out a lot of the installation disks and paperwork I would need to follow through with that. :\ Keep in mind that I am a novice/newbie when it comes to Linux, but I am hoping to become better at it as time goes by. I appreciate any help you guys can give me. This will probably be the last time I attempt anything that could risk the well-being of my PC. (I've also looked through various questions on the site and tested a lot of the solutions. None seem to have worked.)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing every article, I have picked a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    Find Table without Clustered Index – Find Table with no Primary Key: A clustered index is a very important concept for any table; it impacts performance very heavily. Here is a quick script to find tables without a clustered index.

    Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types: Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."

    Limiting Result Sets by Using TABLESAMPLE – Examples: Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.

    User Defined Functions (UDF) Limitations: UDFs have their own advantages and uses, but in this article we look at their limitations: things UDFs cannot do, and why stored procedures are considered more flexible than UDFs. This blog post is a good read if you want to know what the limitations of UDFs are.

    Change Database Compatible Level – Backward Compatibility: For a long time SQL Server stayed on compatibility level 80, which is that of SQL Server 2000. As soon as SQL Server 2005 arrived, however, compatibility became quite a major issue, and since MS has been releasing new versions every 2-3 years, changing the compatibility level is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.

    Constraint on VARCHAR(MAX) Field To Limit It Certain Length: How can I limit a VARCHAR(MAX) field to a maximum length of 12500 characters only? The question was valid, as the application in question allowed 12500 characters. First of all, this requirement is a bit strange, but if someone wants to do the same, they can, as described in this blog post.

    2008

    UNPIVOT Table Example: Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words.

    Create Default Constraint Over Table Column: A simple, straight-to-the-script blog post. I still use it quite often for my own reference.

    UDF – Get the Day of the Week Function: It took me 4 iterations to arrive at this very simple function, which gets the day of the week in a single line.

    2009

    Find Hostname and Current Logged In User Name: Two tricks that let users find the hostname and the currently logged-in user name immediately and very easily.

    Interesting Observation of Logon Trigger On All Servers: While working on a project, I made an interesting observation of a logon trigger executing multiple times. It was absolutely unexpected! As I was logging in only once, I naturally expected the entry only once; however, it appeared multiple times on different threads – indeed an eccentric phenomenon at first sight!

    Difference Between Candidate Keys and Primary Key: One needs to be very careful in selecting the primary key, as an incorrect selection can adversely impact the database architecture and future normalization. For a candidate key to qualify as a primary key, it should be non-NULL and unique in any domain. I have observed quite often that primary keys are seldom changed; I would like your feedback on not changing a primary key.

    Create Multiple Filegroup For Single Database: Why should one create multiple filegroups for a database, and what are the advantages? In this blog post, I explain this in detail.

    List All Objects Created on All Filegroups in Database: In this blog post we discuss the essential question: "How can I find which object belongs to which filegroup? Is there any way to know this?"

    2010

    DATE and TIME in SQL Server 2008: When DATE is converted to DATETIME it adds the time of midnight; when TIME is converted to DATETIME it adds the date of 1900. This is something to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT.

    Disabled Index and Update Statistics: If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time the statistics are updated for the system, the statistics for disabled indexes are also updated.

    Precision of SMALLDATETIME – A 1 Minute Precision: The precision of the SMALLDATETIME datatype is 1 minute. It discards the seconds by rounding up or down any seconds greater than zero.

    2011

    Getting Columns Headers without Result Data – SET FMTONLY ON: SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the result set has only the headers of the results but no data.

    Copy Database from Instance to Another Instance – Copy Paste in SQL Server: SQL Server has a feature which copies a database from one instance to another, and it can be automated using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job.

    Puzzle – SELECT * vs SELECT COUNT(*): If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, you should read this blog post.

    Creating All New Database with Full Recovery Model: This blog post is based on a very interesting story where the user wants every newly created database to behave in a certain fashion by default. The model database is a secret weapon which should be used very carefully and with proper evaluation, but used carefully it can be very beneficial when we need newly created databases to behave in a certain way.

    2012

    In 2012 I ran two interesting series on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence, and I encourage all of you to take part in this fun quiz:

    Guess the Next Value – Puzzle 1
    Guess the Next Value – Puzzle 2
    Guess the Next Value – Puzzle 3

    Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training:

    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • WIF, ADFS 2 and WCF – Part 2: The Service

    - by Your DisplayName here!
    OK – so let's first start with a simple WCF service and connect it to ADFS 2 for authentication. The service simply echoes back the user's claims – just so we can make sure it actually works, and so we can see how the ADFS 2 issuance rules emit claims for the service:

    [ServiceContract(Namespace = "urn:leastprivilege:samples")]
    public interface IService
    {
        [OperationContract]
        List<ViewClaim> GetClaims();
    }

    public class Service : IService
    {
        public List<ViewClaim> GetClaims()
        {
            var id = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

            return (from c in id.Claims
                    select new ViewClaim
                    {
                        ClaimType = c.ClaimType,
                        Value = c.Value,
                        Issuer = c.Issuer,
                        OriginalIssuer = c.OriginalIssuer
                    }).ToList();
        }
    }

    The ViewClaim data contract is simply a DTO that holds the claim information. Next is the WCF configuration – let's have a look step by step. First I mapped all my http-based services to the federation binding. This is achieved by using .NET 4.0's protocol mapping feature (this can also be done the 3.x way – but in that scenario all services will be federated):

    <protocolMapping>
      <add scheme="http" binding="ws2007FederationHttpBinding" />
    </protocolMapping>

    Next, I provide a standard configuration for the federation binding:

    <bindings>
      <ws2007FederationHttpBinding>
        <binding>
          <security mode="TransportWithMessageCredential">
            <message establishSecurityContext="false">
              <issuerMetadata address="https://server/adfs/services/trust/mex" />
            </message>
          </security>
        </binding>
      </ws2007FederationHttpBinding>
    </bindings>

    This binding points to our ADFS 2 installation's metadata endpoint. This is all that is needed for svcutil (aka "Add Service Reference") to generate the required client configuration. I also chose mixed-mode security (SSL + basic message credential) for best performance. This binding also disables sessions – you can control that via the establishSecurityContext setting on the binding. This has its pros and cons – something for a separate blog post, I guess. Next, the behavior section adds support for metadata and WIF:

    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpsGetEnabled="true" />
          <federatedServiceHostConfiguration />
        </behavior>
      </serviceBehaviors>
    </behaviors>

    The next step is to add the WIF-specific configuration (in <microsoft.identityModel />). First we need to specify the key material that we will use to decrypt the incoming tokens. This is optional for web applications, but for web services you need to protect the proof key – so this is mandatory (at least for symmetric proof keys, which are the default):

    <serviceCertificate>
      <certificateReference storeLocation="LocalMachine"
                            storeName="My"
                            x509FindType="FindBySubjectDistinguishedName"
                            findValue="CN=Service" />
    </serviceCertificate>

    You also have to specify which incoming tokens you trust. This is accomplished by registering the thumbprint of the signing keys you want to accept. You get this information from the signing certificate configured in ADFS 2:

    <issuerNameRegistry type="...ConfigurationBasedIssuerNameRegistry">
      <trustedIssuers>
        <add thumbprint="d1 … db"
             name="ADFS" />
      </trustedIssuers>
    </issuerNameRegistry>

    The last step (promised) is to add the allowed audience URIs to the configuration – WCF clients use (by default – and we'll come back to this) the endpoint address of the service:

    <audienceUris>
      <add value="https://machine/soapadfs/service.svc" />
    </audienceUris>

    OK – that's it. Now we have a basic WCF service that uses ADFS 2 for authentication. The next step will be to set up ADFS to issue tokens for this service. Afterwards we can explore various options for using this service from a client. Stay tuned… (if you want to have a look at the full source code or peek at the upcoming parts – you can download the complete solution here)
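    The post references the ViewClaim DTO without showing it. Given the four properties projected in GetClaims above, a minimal sketch could look like the following (the original implementation may differ slightly):

        using System.Runtime.Serialization;

        [DataContract]
        public class ViewClaim
        {
            // One entry per claim the service echoes back to the caller.
            [DataMember] public string ClaimType { get; set; }
            [DataMember] public string Value { get; set; }
            [DataMember] public string Issuer { get; set; }
            [DataMember] public string OriginalIssuer { get; set; }
        }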

    Read the article

  • CodePlex Daily Summary for Monday, September 03, 2012

    CodePlex Daily Summary for Monday, September 03, 2012

Popular Releases
Metodología General Ajustada - MGA: 03.01.03: Aury's changes: adjusted the report margin; the Assumptions column is now displayed in the Decision module. John's changes: integrated the code changes sent by Aury Niño; generated installers; technical support by email and telephone.
Iveely Search Engine: Iveely Search Engine (0.2.0)
Thisismyusername's codeplex page: HTML5 Mulititouch Fruit Ninja Proof of Concept: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. Sorry this example doesn't have great graphics. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but since I'm only using a Codeplex page and most mobile devices can't open .zip files, the fruits are just circles. I hope you enjoy reading the source code anyway.
GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail; bugfix.
Smart Data Access Layer: Smart Data Access Layer Ver 3: This version adds support for executing inline queries. Check the Documentation section for details.
TSQL Code Smells Finder: POC 1.01: Proof of concept 1.01. TSQLDomTest.ps1 and Errors.Txt are required.
Confuser: Confuser build 76542: This is a build of changeset 76542.
Reactive State Machine: ReactiveStateMachine-beta: TouchStateMachine now supports the Microsoft Surface 2.0 SDK. The TouchStateMachine is an extension to the Reactive State Machine. Reactive State Machine uses NuGet for dependency management.
SharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you find any bugs, please let me know at enti@zoznam.sk or post your findings in the Issue Tracker.
Mihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): Mihmojsos OS 3 (Smart Rabbit) is now available.
DotNetNuke Translator: 01.00.00 Beta: First release of the project.
YNA: YNA 0.2 alpha: What's new since 0.1 alpha? A lot of changes, but the most interesting are: StateManager is now better and faster; mouse events for all YnObjects (sprites, images, texts); a really big improvement for YnGroup; gamepad support. And the news: tiled map support (needs refactoring); isometric tiled map support (needs refactoring); transition effects like "FadeIn" and "FadeOut" (YnTransition); timers (YnTimer); path management (YnPath, needs more refactoring). All downloads...
Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: Fixed several issues with streaming mode.
UrlPager: UrlPager 1.2: Fixed a bug in which URL parameters were lost after paging.
Sofire Suite: Sofire v1.5.0.0
EntLib.com: EntLib eCommerce Solution v3.0, based on the Microsoft .NET Framework.
Coevery - Free CRM: Coevery 1.0.0.24: Adds a sample database and installation instructions.
Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on CodePlex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom; sparse Kronecker product much more efficient (now leverages sparsity); direct access to raw matrix storage implementations for advanced extensibility; now also a separate package for a signed core library with a strong name (we dropped strong names in v2.2.0). Also available as NuGet packages...
Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 on one page. Each zip file contains an mdf database file and an ldf log file. This should make it easier to find and download AdventureWorks databases, since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and a full version of the AdventureWorks sample database. The light-weight version is denoted by ...
Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode, the installable packages (source/install) can be found in the INSTALL folder within your module's folder (not the packages folder anymore). Check out the blog post for all of the details about this release: http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...

New Projects
BPVote4PPT: BPVote for PowerPoint.
Cosmo OS: Simplicity in an OS.
Financial Analytic Tools: C#.NET financial analytic tools.
GeminiMVC: An open source CMS written in ASP.NET MVC 4 with speed, extensibility, and ease of use in mind.
JQuery SharePoint Autocomplete People Picker: This jQuery bundle provides an autocomplete people picker based on SharePoint profiles. It can be hosted on SharePoint itself or on remote applications.
Kerbal Space Program PartModule Library: This project is designed to add various functionalities to custom parts for the space program simulation game Kerbal Space Program.
KeyboardRemapper: This tool remaps keys on the keyboard. If you have more than one keyboard or an additional keypad, you can remap the keys of each keyboard independently.
KHStudent
Localized DataAnnotations with T4 templates: Simplified DataAnnotations localization using T4 templates.
MfcLightToolkit: Supports development of small and simple MFC applications. Provides an asynchronous programming model like .NET, file download, easy control resizing, and so on.
Müslüm ÖZTÜRK Code Lib: My project created for testing purposes.
Polska: A test project showing what a Polish grammar program could look like.
QueueLessApp: Here is the code.
RusIS.CMS: aaa
SGPS: A product and service management project.
StemmersNet: Stemmer pack for the .NET Framework.
Trabajo Final de Ingenieria - Javier Vallejos: Final thesis for the engineering degree – Universidad Abierta Interamericana.
TSQL Code Smells Finder: TSQL 'smells' finders.
XNA and Data Driven Design: This project includes links for XNA and data-driven design.
XNA and System Testing: This project includes code for XNA and system testing.
YUGI-AR Project: An open source project for Yugioh-based augmented reality.

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

Michael Chander, Qualcomm

Q: Did you run into any issues when integrating all of the different applications together?
A: Definitely. Our main challenges were in the area of user provisioning and security propagation – all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it in their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color, font, or width – stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

Q: What has been the biggest benefit your end users have seen?
A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering on the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
A: Within a large company, there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

Q: Given the performance, what do you estimate to be the top-end capacity of the system?
A: I think our top-end capacity is really only limited by our hardware. I'm comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the system versus DB connections, without impacting performance.

Q: What's the overview or summary of feedback from the users interacting with the site?
A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new look and feel, and the performance of the site. Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large, though, customers are happy and the business is happy.

Q: Can you describe the impressions about the site before and after the project within Qualcomm?
A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental – for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up, retry the steps, and things would work fine – so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student.

Vince Casarez & Gourav Goyal, Keste

Q: Did you run into any obstacles when implementing the solution?
A: It's interesting – some people call them "obstacles"; on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams needed for a seamless login experience and a seamless end-user experience. There was also a set of dependencies on the user acceptance testing to make sure that everyone understood the use cases for how the system would be used. With branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for team members to offer suggestions on how things could be simpler. But with all the work up front on the user design, and with the business driving this set of experiences, the downstream suggestions that tend to distract a team were minimized. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

Q: Was there a lot of custom work that needed to be done for this particular solution?
A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier, so we didn’t have to implement core functionality like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure, or technology stack, of Oracle. This allowed the team to center the project focus around the business flows and use cases, to meet the core requirements and keep the project on time.

Q: What do you think were the keys to success for rolling out WebCenter?
A: The 5 main keys to success were: 1) sponsorship from the whole organization – senior executive agreement, business owners driving functionality, and IT development alignment; 2) up-front design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct resolution of those issues; and 5) constant prioritization by the business of the issues for development to fix. It also helps to have great team chemistry and really smart people working on the project.

If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    Read the article

  • Ubuntu hangs on booting up after an update

    - by alFReD NSH
    I made a clean install yesterday, restarted for the first time, and everything went well. Then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling and doing the same thing again, but the same thing happened. Here's what I see before the Ubuntu logo with the five dots is shown. After that, 3 or 4 of the dots load and it hangs there. If I press arrow up before that, this is shown. I started my laptop again today (the pictures are from the day before), then booted up with a live CD and got the logs.

dmesg: http://pastebin.com/aVxV7BQF
syslog: http://pastebin.com/4E2BrRUK

And some info:

alfred@alFitop:~$ uname -a
Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

lshw: http://pastebin.com/AZbKJmsT
sources.list: http://pastebin.com/2HazmuyV

My problem is a bit similar to this one: http://ubuntuforums.org/showthread.php?t=1918271 Though I didn't change my x.org config – only the home directory was changed and packages updated. I've tried memtest and fsck; both passed. In the recovery mode boot option, I've also realized that the same things happen in failsafe graphical mode. But when I go into network mode, I can boot up my system, though of course the graphics are just basic. Adding "blacklist intel_ips" to /etc/modprobe.d/blacklist.conf solves the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've also tried booting into 3.2.0-23 (the one the installer came with), but with the same results. Also uninstalled AppArmor – it didn't help. I've installed Ubuntu again, this time without copying the home directory – same result.

--- UPDATE --- This problem was solved before by removing backports, but it's back again! I updated my laptop last night and the problem came back, so it's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm in almost the same situation.

--- UPDATE --- I realized this has also happened at times when I had updated (without restarting afterwards) and my computer's power was cut off, shutting it down. And I realized that doing what I described in my answer doesn't work anywhere without the GUI (networking mode has the GUI).

    Read the article

  • How to implement an intelligent enemy in a shoot-em-up?

    - by bummzack
    Imagine a very simple shoot-em-up, something we all know: You're the player (green). Your movement is restricted to the X axis. Our enemy (or enemies) is at the top of the screen; his movement is also restricted to the X axis. The player fires bullets (yellow) at the enemy.

I'd like to implement an A.I. for the enemy that is really good at avoiding the player's bullets. My first idea was to divide the screen into discrete sections and assign weights to them. There are two weights: the "bullet-weight" (grey) is the danger imposed by a bullet – the closer the bullet is to the enemy, the higher the bullet-weight (0..1, where 1 is the highest danger); lanes without a bullet have a weight of 0. The second weight is the "distance-weight" (lime-green): for every lane moved, I add 0.2 movement cost (this value is somewhat arbitrary for now and could be tweaked). Then I simply add the weights (white) and go to the lane with the lowest total (red). But this approach has an obvious flaw: it can easily miss local minima, since the optimal place to go could simply be between two incoming bullets (as denoted by the white arrow).

So here's what I'm looking for:
- It should find a way through the bullet storm, even when there's no lane free from the threat of a bullet.
- The enemy should reliably dodge bullets by picking an optimal (or almost optimal) solution.
- The algorithm should be able to factor in bullet movement speed (bullets might move with different velocities).
- There should be ways to tweak the algorithm so that different levels of difficulty can be applied (dumb to super-intelligent enemies).
- The algorithm should allow different goals: the enemy doesn't only want to evade bullets, he should also be able to shoot the player. That means positions from which the enemy can fire at the player should be preferred when dodging bullets.

So how would you tackle this? Contrary to other games of this genre, I'd like to have only a few, but very "skilled" enemies instead of masses of dumb ones.
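For reference, a direct transcription of the lane-weight idea above could look like this C# fragment – the 0.2 movement cost mirrors the number in the question, while the array layout and names are my own for illustration:

using System;

class LaneScorer
{
    const double MoveCostPerLane = 0.2; // the "distance-weight" per lane moved

    // bulletWeight[i] holds the 0..1 danger for lane i (0 = no bullet).
    // Returns the lane with the lowest combined weight.
    public static int PickLane(double[] bulletWeight, int currentLane)
    {
        int bestLane = currentLane;
        double bestScore = double.MaxValue;

        for (int lane = 0; lane < bulletWeight.Length; lane++)
        {
            double distanceWeight = Math.Abs(lane - currentLane) * MoveCostPerLane;
            double score = bulletWeight[lane] + distanceWeight;

            if (score < bestScore)
            {
                bestScore = score;
                bestLane = lane;
            }
        }
        return bestLane;
    }
}

As the question itself notes, this greedy single-frame scoring is exactly what misses the safe gap between two incoming bullets; one common refinement is to simulate each candidate lane a few steps into the future and score the sequence of positions rather than the current frame alone.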

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I’ve taken a good hard look at JavaScript. I’ve used it before, but mostly in the capacity of web design. Using jQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I’m finding as I use it more is that I may have been wrong in my assumptions about it. Let me explain.

I enjoyed doing cool stuff with jQuery, but my limited experience with JavaScript as a language, coupled with the bad things that I heard about it, led me to not have any real interest in it. However, JavaScript is ubiquitous on the web, and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the book JavaScript: The Good Parts by Douglas Crockford.

Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following:
- the global object
- null vs. undefined
- truthy and falsy
- limited (nearly nonexistent) scoping
- ‘==’ and ‘===’ (I just don’t get the reason for coercion)

However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I’m not going to go into a huge dissertation on what I like about it, but some things include:
- object literal notation
- dynamic typing
- functional style (JavaScript: The Good Parts describes it as Lisp in C’s clothing)
- JSON (better than XML)

There are parts of JavaScript that seem strange to OOP developers like myself. However, just because something is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.

I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can lead to this, such as choosing the wrong technology for a problem’s solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight to CoffeeScript or Dart. After exploring it, I find that I am enjoying it more the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another ‘arrow’ to the ‘quiver’, so to speak. I do still intend to learn CoffeeScript to see what the hubbub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.

Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • Suggestions for Summer Intern Application Assignments

    - by orangepips
    As part of our application process we want prospective college interns to complete an assignment on their own – either programming or analytical – to give us something tangible to evaluate, such as code or a flowchart. I have two ideas for these assignments, one programming and one analytical, and I am interested in gathering feedback about both.

Programming Assignment
Generate a month's calendar for a given date. The first row should indicate the days of the week (e.g. Sunday – Saturday). Each subsequent row should contain a week's days. The date supplied should be highlighted (e.g. bolded). I am thinking we'll probably prescribe the output format even more strictly – probably down to what the HTML source should look like, including CSS classes. The thinking is that this forces applicants to actually do some work even if they merely copy a solution from the internet. (A rough sketch of one possible solution appears after the questions below.)

Analytical Assignment
Diagram or describe in prose a system for managing a set of traffic lights at a four-way intersection. Each direction (i.e. North, South, East and West) has two lanes (i.e. right and left). The left lane is turn-only and has a green arrow light to indicate right of way. The system is able to detect whether lanes have cars in them and changes the lights accordingly. I would expect a flowchart or some prose describing a finite state machine that deals with each contingency. This would hopefully provide some indication of the applicant's ability to reason through a logic problem of sorts and articulate an approach for solving it.

Areas Seeking Feedback
- Is it unreasonable to ask this of applicants? If not, is it better to request it before or after a phone screen?
- Are these questions too hard or too easy for a collegiate audience?
- Any suggestions for alternate questions?
- Do these seem like good tools for analyzing people who would be part of a software development life cycle?
- Programming language suggestions – I'm thinking Java, Python and/or C# (we're actually a ColdFusion shop).
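For what it's worth, here is a rough C# sketch of the sort of logic the calendar assignment calls for – plain-console output instead of the HTML/CSS format the post would actually prescribe, so the formatting details here are my own assumptions:

using System;

class CalendarPrinter
{
    static void Main()
    {
        PrintMonth(new DateTime(2012, 9, 3)); // highlight September 3, 2012
    }

    static void PrintMonth(DateTime date)
    {
        // first row: days of the week
        Console.WriteLine("Su Mo Tu We Th Fr Sa");

        var first = new DateTime(date.Year, date.Month, 1);
        int days = DateTime.DaysInMonth(date.Year, date.Month);

        // indent the first week to the weekday of the 1st
        Console.Write(new string(' ', 3 * (int)first.DayOfWeek));

        for (int day = 1; day <= days; day++)
        {
            // mark the supplied date with '*' in place of HTML bolding
            Console.Write(day == date.Day ? $"{day,2}*" : $"{day,2} ");

            var weekday = new DateTime(date.Year, date.Month, day).DayOfWeek;
            if (weekday == DayOfWeek.Saturday)
                Console.WriteLine(); // start the next week's row
        }
        Console.WriteLine();
    }
}

An HTML version would emit table rows instead of console lines, with the prescribed CSS class on the highlighted cell, but the week-wrapping logic stays the same.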

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

Step 1: Open Visual Studio and create a new Integration Services Project.
Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
Step 5: Now, from the Table or View dropdown, choose the name of the Document Library that you are going to back up, and close the wizard.
Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
Step 8: This will open a new Visual Studio instance with a project in it. In this project, add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
String downloadDir = "C:\\Documents\\";
SharePointConnection conn = new SharePointConnection(connString);
SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
comm.CommandType = CommandType.StoredProcedure;
comm.Parameters.Clear();
String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
comm.Parameters.Add(new SharePointParameter("@File", file));
String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
comm.Parameters.Add(new SharePointParameter("@Library", list));
String remoteFile = Row.LinkFilenameNoMenu.ToString();
comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
comm.ExecuteNonQuery();

After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

SSIS Sample Project: To help you get started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here.

Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • Ubuntu 12.04.1 failing in VM, both VBox and VMware, on new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, and getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep Windows as the main OS on my machine for work, so I'm trying to virtualize Ubuntu 12.04.1, without luck so far (many attempts).

I have a new HP Envy 4t-1000 with 16 GB RAM and 32 GB SSD caching with a 500 GB spindle hard drive. The graphics card is an Intel HD 3000 with an AMD Radeon 7670M.

When installing Ubuntu desktop in VBox, I'm getting this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939

With VMware Workstation 7 (patched), I complete the install of Ubuntu, it reboots, the purple desktop briefly flashes, then it drops to a command line. I bought a beginning-Ubuntu book, and it recommends trying to manually configure graphics if this happens. So I tried doing a safe boot holding Shift – the first screen (GRUB) loads fine, and I choose recovery mode. After choosing recovery mode, I get the recovery options and can arrow down to what the book suggests: 'Run in failsafe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box; at the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." Then there is an OK button way down at the bottom. When I select OK, I get a menu of options; the book recommended 'reconfigure graphics.' When I try this, I get a menu of two options: 1) use generic (default) configuration, or 2) use backup. I've tried both options several times; hitting OK just refreshes the screen and nothing more. Rebooting at this point just goes back to the command line as before.

I don't know what to do at this point; I've spent too many hours this weekend trying in both VBox and VMware to get Ubuntu going. Isn't there some very basic graphics display or something I can use to at least get into the desktop? I explored GRUB some more and tried to look at the startup and X server logs – both are blank. No help there, I guess. When I choose 'Edit the configuration file' and then 'ok', the screen just refreshes on the same menu options and nothing happens.

Thanks for any advice. I really need to focus on learning Linux, Apache, and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize – thanks for any help/advice.

    Read the article

  • HTML5 platformer collision detection problem

    - by fnx
    I'm working on a 2D platformer game, and I'm having a lot of trouble with collision detection. I've looked through some tutorials and questions asked here and on Stack Overflow, but I guess I'm just too dumb to understand what's wrong with my code. I wanted to make simple bounding-box-style collisions and the ability to determine on which side of the box the collision happens, but no matter what I do, I always get some weird glitches – the player gets stuck on the wall, or the jumping is jittery. You can test the game here: Platform engine test. Arrow keys move; z = run, x = jump, c = shoot. Try to jump into the first pit and slide on the wall. Here's the collision detection code:

function checkCollisions(a, b) {
    if ((a.x > b.x + b.width) || (a.x + a.width < b.x) ||
        (a.y > b.y + b.height) || (a.y + a.height < b.y)) {
        return false;
    } else {
        handleCollisions(a, b);
        return true;
    }
}

function handleCollisions(a, b) {
    var a_top = a.y, a_bottom = a.y + a.height,
        a_left = a.x, a_right = a.x + a.width,
        b_top = b.y, b_bottom = b.y + b.height,
        b_left = b.x, b_right = b.x + b.width;

    if (a_bottom + a.vy > b_top &&
        distanceBetween(a_bottom, b_top) + a.vy < distanceBetween(a_bottom, b_bottom)) {
        a.topCollision = true;
        a.y = b.y - a.height + 2;
        a.vy = 0;
        a.canJump = true;
    } else if (a_top + a.vy < b_bottom &&
               distanceBetween(a_top, b_bottom) + a.vy < distanceBetween(a_top, b_top)) {
        a.bottomCollision = true;
        a.y = b.y + b.height;
        a.vy = 0;
    } else if (a_right + a.vx > b_left &&
               distanceBetween(a_right, b_left) < distanceBetween(a_right, b_right)) {
        a.rightCollision = true;
        a.x = b.x - a.width - 3;
        //a.vx = 0;
    } else if (a_left + a.vx < b_right &&
               distanceBetween(a_left, b_right) < distanceBetween(a_left, b_left)) {
        a.leftCollision = true;
        a.x = b.x + b.width + 3;
        //a.vx = 0;
    }
}

function distanceBetween(a, b) {
    return Math.abs(b - a);
}
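For comparison, many platformer write-ups resolve this kind of AABB collision along the axis of minimum penetration instead of ordering side checks by distance: compute how deep the boxes overlap on X and on Y, then push the entity out along whichever overlap is smaller. A hedged C# sketch – the struct and field names are my own assumptions, not taken from the code above:

using System;

struct Box
{
    public float X, Y, Width, Height; // position and size
    public float Vx, Vy;              // velocity
}

static class Collisions
{
    // Resolve moving box `a` against static box `b` along the axis
    // of least overlap; does nothing if the boxes don't intersect.
    public static void Resolve(ref Box a, Box b)
    {
        float overlapX = Math.Min(a.X + a.Width, b.X + b.Width) - Math.Max(a.X, b.X);
        float overlapY = Math.Min(a.Y + a.Height, b.Y + b.Height) - Math.Max(a.Y, b.Y);
        if (overlapX <= 0 || overlapY <= 0)
            return;

        if (overlapX < overlapY)
        {
            // push out horizontally, away from b's center
            a.X += (a.X + a.Width / 2 < b.X + b.Width / 2) ? -overlapX : overlapX;
            a.Vx = 0;
        }
        else
        {
            // push out vertically
            a.Y += (a.Y + a.Height / 2 < b.Y + b.Height / 2) ? -overlapY : overlapY;
            a.Vy = 0;
        }
    }
}

Because the push-out uses the measured overlap rather than fixed offsets like the +2 and ±3 in the code above, the entity ends up exactly flush with the wall, which is one common way to avoid the sticking and jitter described.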

    Read the article

  • Properly create centered vertical navigation list with a hover image on both sides?

    - by Damainman
    I have a navigation list placed inside a container element, and I am attempting to get an image to appear on both sides of a link when you hover over it – basically, an arrow on each side of the link. I have managed to get the effect I am looking for with:

<ul>
  <li><a href="#">Services</a></li>
  <li><a href="#">About</a></li>
  <li><a href="#">Media</a></li>
  <li><a href="#">FAQ</a></li>
  <li><a href="#">Portfolio</a></li>
  <li><a href="#">Contact</a></li>
</ul>

.nav ul {
  list-style: none;
  text-align: center;
}

.nav li:hover a:before,
.nav li:hover a:after {
  content: url(../images/nav_bullet.png);
}

The end result will look like the following, but with the list centered:

link >> Hovered Link << link

However, I am not sure that is the best way of going about it, and the image is too close to the text. I tried adding a margin and padding, but that didn't work. To top it off, the image is not vertically centered with the link text. Does anyone know of a proper way to do this, as I am just going by trial and error? Thank you in advance!

    Read the article

  • How to decrypt an encrypted Apple iTunes iPhone backup?

    - by afit
    I've been asked by a number of unfortunate iPhone users to help them restore data from their iTunes backups. This is easy when they are unencrypted, but not when they are encrypted, whether or not the password is known. As such, I'm trying to figure out the encryption scheme used on mddata and mdinfo files when encrypted. I have no problems reading these files otherwise, and have built some robust C# libraries for doing so. (If you're able to help, I don't care which language you use. It's the principle I'm after here!)

The Apple "iPhone OS Enterprise Deployment Guide" states that "Device backups can be stored in encrypted format by selecting the Encrypt iPhone Backup option in the device summary pane of iTunes. Files are encrypted using AES128 with a 256-bit key. The key is stored securely in the iPhone keychain." That's a pretty good clue, and there's some good info on Stack Overflow on iPhone AES/Rijndael interoperability suggesting a key size of 128 and CBC mode may be used. Aside from any other obfuscation, a key and initialisation vector (IV)/salt are required.

One might assume that the key is a manipulation of the "backup password" that users are prompted to enter by iTunes and passed to "AppleMobileBackup.exe", padded in a fashion dictated by CBC. However, given the reference to the iPhone keychain, I wonder whether the "backup password" might not be used as a password on an X509 certificate or symmetric private key, and that the certificate or private key itself might be used as the key. (AES and the iTunes encrypt/decrypt process is symmetric.)

The IV is another matter, and it could be a few things. Perhaps it's one of the keys hard-coded into iTunes, or into the devices themselves. Although Apple's comment above suggests the key is present on the device's keychain, I think this isn't that important. One can restore an encrypted backup to a different device, which suggests all information relevant to the decryption is present in the backup and iTunes configuration, and that anything solely on the device is irrelevant and replaceable in this context. So where might the key be?

I've listed paths below from a Windows machine, but it's much of a muchness whichever OS we use. The "\appdata\Roaming\Apple Computer\iTunes\itunesprefs.xml" contains a PList with a "Keychain" dict entry in it. The "\programdata\apple\Lockdown\09037027da8f4bdefdea97d706703ca034c88bab.plist" contains a PList with "DeviceCertificate", "HostCertificate", and "RootCertificate", all of which appear to be valid X509 certs. The same file also appears to contain the asymmetric keys "RootPrivateKey" and "HostPrivateKey" (my reading suggests these might be PKCS #7-enveloped). Also, within each backup there are "AuthSignature" and "AuthData" values in the Manifest.plist file, although these appear to be rotated as each file gets incrementally backed up, suggesting they're not that useful as a key, unless something really quite involved is being done.

There's a lot of misleading stuff out there suggesting that getting data from encrypted backups is easy. It's not, and to my knowledge it hasn't been done. Bypassing or disabling the backup encryption is another matter entirely, and is not what I'm looking to do. This isn't about hacking apart the iPhone or anything like that. All I'm after here is a means to extract data (photos, contacts, etc.) from encrypted iTunes backups as I can from unencrypted ones. I've tried all sorts of permutations with the information I've put down above, but got nowhere.
I'd appreciate any thoughts or techniques I might have missed.
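For anyone experimenting along these lines, the decryption attempt itself is the easy part – in C# it is a few lines of AES-CBC plumbing. The hard, unsolved part is deriving the key and IV from the backup password, so the parameters below are pure placeholders, not a working recipe:

using System;
using System.IO;
using System.Security.Cryptography;

class BackupDecryptAttempt
{
    // key and iv are placeholders: deriving them from the "backup
    // password" (or the lockdown certificates) is the open question.
    static byte[] TryDecrypt(byte[] cipherText, byte[] key, byte[] iv)
    {
        using (var aes = new RijndaelManaged())
        {
            aes.KeySize = 128;               // per the AES/Rijndael interop hints
            aes.Mode = CipherMode.CBC;
            aes.Padding = PaddingMode.PKCS7; // assumption; could equally be none
            aes.Key = key;                   // assumed 16-byte key
            aes.IV = iv;                     // assumed 16-byte IV

            using (var decryptor = aes.CreateDecryptor())
            using (var ms = new MemoryStream(cipherText))
            using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
            using (var result = new MemoryStream())
            {
                cs.CopyTo(result);
                return result.ToArray();
            }
        }
    }
}

Running candidate keys through something like this against a known mddata file at least gives a quick way to test derivation theories – garbage output means the theory is wrong.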

    Read the article

  • IE8 Developer Tools won't open

    - by Dave
    Just installed IE8 on WinXP. Trying to use the Dev Tools, but clicking the menu item or hitting F12 doesn't do anything. I can see the option in the Tools menu; I just can't use it. I've checked to make sure it wasn't opening minimized or off-screen, and I've checked the registry settings used to disable the tools. Those registry keys don't even exist. Suggestions? I was going to reinstall, but thought I'd check here first.
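One thing worth double-checking from code (or regedit) is the policy key that most write-ups cite for disabling the Developer Tools. The exact path below is an assumption based on those write-ups, and an absent key simply means no policy is set:

using System;
using Microsoft.Win32;

class DevToolsPolicyCheck
{
    static void Main()
    {
        // Commonly cited policy location for IE8 Developer Tools (assumed path).
        const string path = @"Software\Policies\Microsoft\Internet Explorer\IEDevTools";

        using (var key = Registry.CurrentUser.OpenSubKey(path))
        {
            object disabled = (key == null) ? null : key.GetValue("Disabled");
            Console.WriteLine(disabled == null
                ? "No Developer Tools policy found under HKCU."
                : "Disabled policy value: " + disabled);
        }
    }
}

The same path under HKEY_LOCAL_MACHINE is worth a look too; if neither exists, the policy angle is a dead end and a reinstall may indeed be the next step.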

    Read the article

  • SSH Connection Error : ssh_exchange_identification: read: Connection reset by peer

    - by Senthil G
    When I tried to connect to the server via SSH, I got the following error:

[root@oneeighty ~]# ssh -vvv -p 443 [email protected]
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to xxx.xxx.xxx [IP] port 443.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: loaded 3 keys
ssh_exchange_identification: read: Connection reset by peer

I have checked the SSH configuration on the server and the client, and there are no issues. I restarted the SSH service on the server and then restarted both the server and the client, but the issue is not resolved. Please help me fix this issue.

Thanks in advance,
-Senthil

    Read the article

  • Very large database, very small portion being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150 GB in size, and my memory buffer is 8 GB. Most of my data is rarely retrieved, or is mainly retrieved by backend processes. I would very much prefer to keep it around, because some features require it. Some of it, though – namely some tables, and some identifiable parts of certain tables – is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: We are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >