Search Results

Search found 21268 results on 851 pages for 'null route'.


  • Very very slow transfer speeds between Windows 7 and samba server running on Ubuntu 11.10/12.04 minimal

    - by kuzyt
    As mentioned in the title, I tried transferring files between Windows 7 and the samba server running on both Ubuntu 11.10 and 12.04, but both showed very slow transfer speeds. Can someone please guide me in the right direction to debug this problem?

    A wget test of raw network throughput:

    wget --output-document=/dev/null http://tokyo1.linode.com/100MB-tokyo.bin
    --2012-08-21 22:02:17-- http://tokyo1.linode.com/100MB-tokyo.bin
    Resolving tokyo1.linode.com (tokyo1.linode.com)... 106.187.33.12
    Connecting to tokyo1.linode.com (tokyo1.linode.com)|106.187.33.12|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `/dev/null'
    8% [=============> ] 8,923,980 64.8K/s eta 15m 0s

    Wireless interface status (iwconfig):

    wlan0     IEEE 802.11abgn  ESSID:"TNET"
              Mode:Managed  Frequency:2.462 GHz  Access Point: 58:6D:8F:26:20:7A
              Bit Rate=117 Mb/s  Tx-Power=20 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Power Management:off
              Link Quality=57/70  Signal level=-53 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:101  Invalid misc:2448  Missed beacon:0

    Wireless adapter details (lspci -vv):

    03:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01)
        Subsystem: Atheros Communications Inc. Device 3112
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at fea00000 (64-bit, non-prefetchable) [size=128K]
        Expansion ROM at fea20000 [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
            Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
            Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/4 Maskable+ 64bit+
            Address: 0000000000000000  Data: 0000
            Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
            DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
                ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
            DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
                MaxPayload 128 bytes, MaxReadReq 512 bytes
            DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
            LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <64us
                ClockPM- Surprise- LLActRep- BwNot-
            LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
                ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
            LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
            DevCap2: Completion Timeout: Not Supported, TimeoutDis+
            DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
            LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
                Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                Compliance De-emphasis: -6dB
            LnkSta2: Current De-emphasis Level: -6dB
        Capabilities: [100 v1] Advanced Error Reporting
            UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
            CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
            CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
            AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
        Capabilities: [140 v1] Virtual Channel
            Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
            Arb: Fixed- WRR32- WRR64- WRR128-
            Ctrl: ArbSelect=Fixed
            Status: InProgress-
            VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
                Status: NegoPending- InProgress-
        Capabilities: [300 v1] Device Serial Number 00-00-00-00-00-00-00-00
        Kernel driver in use: ath9k
        Kernel modules: ath9k

    Read the article

  • SQL SERVER – Difference Between CURRENT_TIMESTAMP and GETDATE() – CURRENT_TIMESTAMP Equivalent in SQL Server

    - by pinaldave
    A common question I often get from Oracle/MySQL professionals is: "What is the equivalent of CURRENT_TIMESTAMP in SQL Server?" And here is a common question I often get from SQL Server professionals: "What is the difference between CURRENT_TIMESTAMP and GETDATE()?" It is a very simple question, but it shows up so frequently that I feel like writing about it. Well, in SQL Server, GETDATE() is equivalent to CURRENT_TIMESTAMP, so if you use CURRENT_TIMESTAMP in your SELECT statement it will work fine; both of them return the same value. Now let us go to the next question, regarding the difference between GETDATE and CURRENT_TIMESTAMP. As a matter of fact, there is no difference between them in SQL Server (Reference Link). CURRENT_TIMESTAMP is an ANSI SQL function, whereas GETDATE is the T-SQL implementation of the same function. Both of them derive their value from the operating system of the computer on which the SQL Server instance is running. The above discussion prompts another question: in that case, which should one use, GETDATE or CURRENT_TIMESTAMP? This is indeed a tricky and interesting question. I am very comfortable using GETDATE(), so that is what I will use, but as a matter of fact there is no right or wrong answer. If you want to follow the ancient saying "When in Rome, do as the Romans do", I suggest using GETDATE(); otherwise, continue using CURRENT_TIMESTAMP. With that said, there is one very important property we all need to keep in mind: if you use CURRENT_TIMESTAMP while creating an object, it is automatically converted to GETDATE() and stored internally. To illustrate what I am suggesting, here is an example. Create a table using the following script:

    CREATE TABLE [dbo].[TestTable](
        [Cold2] [datetime] NULL
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TestTable] ADD DEFAULT (CURRENT_TIMESTAMP) FOR [Cold2]
    GO

    Now go to SSMS and generate the script for the table, and you will notice the following syntax:

    CREATE TABLE [dbo].[TestTable](
        [Cold2] [datetime] NULL
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TestTable] ADD DEFAULT (GETDATE()) FOR [Cold2]
    GO

    You can see that SQL Server has automatically converted CURRENT_TIMESTAMP to GETDATE(). I guess this gives us an idea of how they behave. Now go ahead and make your choice! Do let me know which one you will use, CURRENT_TIMESTAMP or GETDATE(), in the comments area. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
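    The side-by-side SELECT the paragraph above refers to is not reproduced in this excerpt; a minimal sketch of it (column aliases are illustrative) would be:

    SELECT CURRENT_TIMESTAMP AS [Current_TimeStamp],
           GETDATE()         AS [GetDate];
    -- Both columns return the identical datetime value for the row.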

    Read the article

  • Using commands with ApplicationBarMenuItem and ApplicationBarButton in Windows Phone 7

    - by Laurent Bugnion
    Unfortunately, in the current version of the Windows Phone 7 Silverlight framework, it is not possible to attach a command to the ApplicationBarMenuItem and ApplicationBarButton controls. These two controls appear in the Application Bar, for example with the following markup:

    <phoneNavigation:PhoneApplicationPage.ApplicationBar>
      <shell:ApplicationBar x:Name="MainPageApplicationBar">
        <shell:ApplicationBar.MenuItems>
          <shell:ApplicationBarMenuItem Text="Add City" />
          <shell:ApplicationBarMenuItem Text="Add Country" />
        </shell:ApplicationBar.MenuItems>
        <shell:ApplicationBar.Buttons>
          <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.video.rest.png" />
          <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.settings.rest.png" />
          <shell:ApplicationBarIconButton IconUri="/Resources/appbar.refresh.rest.png" />
        </shell:ApplicationBar.Buttons>
      </shell:ApplicationBar>
    </phoneNavigation:PhoneApplicationPage.ApplicationBar>

    This code will create the UI shown in the original post (screenshots of the application bar, collapsed and expanded). ApplicationBar items are not, however, controls. A quick look in MSDN shows the class hierarchy for ApplicationBarMenuItem, for example (shown as an image in the original post). Unfortunately, this prevents all the mechanisms that are normally used to attach a Command (for example a RelayCommand) to a control. For example, the attached behavior present in the class ButtonBaseExtension (from the Silverlight 3 version of the MVVM Light toolkit) can only be attached to a DependencyObject. Similarly, Blend behaviors (such as EventToCommand from the toolkit's Extras library) need a FrameworkElement to work.

    Using code behind

    The alternative is to use code behind. As I said in my MIX10 talk, the MVVM police will not take your family away if you use code behind (this quote was actually suggested to me by Glenn Block); the code behind is there for a reason. In our case, invoking a command in the ViewModel requires the following code:

    In MainPage.xaml:

    <shell:ApplicationBarMenuItem Text="My Menu 1" Click="ApplicationBarMenuItemClick"/>

    In MainPage.xaml.cs:

    private void ApplicationBarMenuItemClick(object sender, System.EventArgs e)
    {
        var vm = DataContext as MainViewModel;
        if (vm != null)
        {
            vm.MyCommand.Execute(null);
        }
    }

    Conclusion

    Resorting to code behind to bridge the gap between the View and the ViewModel is less elegant than using attached behaviors, either through an attached property or through a Blend behavior. It does, however, work fine. I don't have any information on whether future changes to the Windows Phone 7 Application Bar API will make this easier. In the meantime, I recommend using code behind instead.

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to seeing what sessions you have on your servers, what options you have selected, and so forth, but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same questions: Where are the .xel log files? What events, actions or predicates are set for the sessions on the server? What sessions are there on the server already? I got tired of these being perpetual questions and wrote some T-SQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away.

    First, some history. If you just came here for the code, skip down a few paragraphs and it's all there. If you want a little time to reminisce about SQL Server, then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently; there are many versions of SQL Server, but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2, Extended Events made their first appearance, and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin, which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008 R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed, and there is no reason not to as it will let you work with all versions of SQL Server, then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built-in functionality to manage XE sessions; this functionality does not apply to 2008 or 2008 R2 instances, though. This means you are somewhat restricted and have to use T-SQL to manage XE sessions on older versions of SQL Server.

    OK, those of you that skipped ahead for the code, you need to start from here: So, you are working with SSMS 2012 but have a SQL Server that is an earlier version that needs an XE session created, or you think there is a session created but you aren't sure, or you know it's there but can't remember whether it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate it, and the code needed is unlikely to remain in your memory. I have created two pieces of code. The first examines the sys.server_event_... catalog views in combination with the sys.dm_xe_... management views to give the name of every session that exists on the server, regardless of whether it is running or not, along with two pieces of generated T-SQL per session. One piece will alter the state of the session: if the session is running then the generated code will stop it when executed, and vice versa. The other piece will drop the selected session; if the session is running, the code will stop it first. Do not execute the DROP code unless you are sure you have the CREATE code to hand. The session will be dropped from the server without a second chance to change your mind.
    /**************************************************************/
    /***   To locate and describe event sessions on a server   ***/
    /***                                                        ***/
    /***   Generates TSQL to start/stop/drop sessions          ***/
    /***                                                        ***/
    /***   Jonathan Allen - @fatherjack                         ***/
    /***   June 2013                                            ***/
    /**************************************************************/
    SELECT  [EES].[name] AS [Session Name - all sessions] ,
            CASE WHEN [MXS].[name] IS NULL THEN 'Stopped'
                 ELSE 'Running'
            END AS SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = START;'
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP;'
            END AS ALTER_SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN 'DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP; ' + CHAR(10)
                      + '-- DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
            END AS DROP_Session
    FROM    [sys].[server_event_sessions] AS EES
            LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' )
    ORDER BY SessionState
    GO
    /**************************************************/
    /***   Running session summary                  ***/
    /***                                            ***/
    /***   Details key values of XE sessions       ***/
    /***   that are in a running state             ***/
    /***                                            ***/
    /***   Jonathan Allen - @fatherjack             ***/
    /***   June 2013                                ***/
    /**************************************************/
    SELECT  [EES].[name] AS [Session Name - running sessions] ,
            [EESE].[name] AS [Event Name] ,
            COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,
            [EESA].[Action] AS [Event Action(s)] ,
            [EEST].[Target] AS [Session Target(s)] ,
            ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC]
    FROM    [sys].[server_event_sessions] AS EES
            INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
            INNER JOIN [sys].[server_event_session_events] AS [EESE]
                ON [EES].[event_session_id] = [EESE].[event_session_id]
            LEFT JOIN [sys].[server_event_session_fields] AS EESF
                ON ( [EES].[event_session_id] = [EESF].[event_session_id]
                     AND [EESF].[name] = 'filename' )
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + SEST.name
                                         FROM   [sys].[server_event_session_targets] AS SEST
                                         WHERE  [EES].[event_session_id] = [SEST].[event_session_id]
                                         FOR XML PATH('') ), 1, 2, '') AS [Target]
                        ) AS EEST
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + [sesa].[name]
                                         FROM   [sys].[server_event_session_actions] AS sesa
                                         WHERE  [sesa].[event_session_id] = [EES].[event_session_id]
                                         FOR XML PATH('') ), 1, 2, '') AS [Action]
                        ) AS EESA
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /* Optional: exclude the 'out-of-the-box' sessions */

    I have excluded the system_health and AlwaysOn sessions, as I don't want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate them, but that is a whole lot of aggravation I'd rather avoid. I hope that these scripts are useful to you, and I would be obliged if you would keep my name in the script comments. I have no problem with you using them in production or personal circumstances; however, they come with no warranty or guarantee. Don't use them unless you understand them and are happy with what they are going to do. I am not responsible for the consequences of executing these scripts on your servers.
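    If you need a session to test these scripts against, below is a minimal sketch of creating and starting one with T-SQL on a 2008-era instance. The session name, file path and filter threshold are illustrative, not from the article; check the duration unit for your build before relying on the predicate.

    CREATE EVENT SESSION [long_statements] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        ( ACTION ( sqlserver.session_id, sqlserver.sql_text )
          WHERE duration > 1000000 )   -- illustrative threshold; confirm ms vs. microseconds on your version
    ADD TARGET package0.asynchronous_file_target
        ( SET filename = N'C:\XELogs\long_statements.xel' );
    GO
    ALTER EVENT SESSION [long_statements] ON SERVER STATE = START;
    GO

    Running either of the scripts above should then list [long_statements] together with its generated ALTER and DROP statements.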

    Read the article

  • SQL Server Boolean (bit datatype)

    - by Derek D.
    In SQL Server, boolean values can be represented using the bit datatype. Bit values differ from boolean values in that a bit can actually be one of three values: 1, 0, or NULL, while booleans can only be either true or false. When assigning bits, it is best to use 1 or zero [...]
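    As a quick sketch of the three possible bit states (the variable name is illustrative, not from the article):

    DECLARE @flag bit;
    SELECT @flag AS [NotYetAssigned];   -- NULL until a value is assigned
    SET @flag = 1;
    SELECT @flag AS [True];             -- 1
    SET @flag = 0;
    SELECT @flag AS [False];            -- 0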

    Read the article

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems, as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly; and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database, and the connection string to the data mart. Although it would be reasonably straightforward to change the content of the XML files programmatically, this adds another layer of complication and obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard!

    SSIS "forgets" assembly references

    If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference, so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2!

    Sample Package

    So let's take a look at the sample SSIS package I have provided, which can be downloaded from here: DeployOlapCubeExample.zip. A screenshot after a successful run appears in the original post.

    Connection Managers

    The package has three connection managers:

    AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of your OLAP database project once you have clicked the "Build" button in Visual Studio.

    TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name.

    SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file.

    Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen (see the screenshot in the original post). If you configure logging for the package, then these messages will also appear in your SSIS logging.
    Sample Code Walkthrough

    Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the names of both the target OLAP server and the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy.

    svr.Connect(olapServerName);
    if (svr.Connected)
    {
        // Drop the OLAP database if it already exists
        Database db = svr.Databases.FindByName(olapDatabaseName);
        if (db != null)
        {
            db.Drop();
        }
        // rest of script
    }

    Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process.

    // Start generating the main part of the XMLA command
    XmlDocument xmlaCommand = new XmlDocument();
    xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));

    Next we need to merge the two XML files, which we can do by simply setting the InnerXml property of the ObjectDefinition node as follows:

    // load OLAP database definition from .asdatabase file identified by connection manager
    XmlDocument olapCubeDef = new XmlDocument();
    olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString);
    // merge the two XML files by obtaining a reference to the ObjectDefinition node
    oaRootNode.InnerXml = olapCubeDef.InnerXml;

    One hurdle I had to overcome was removing detritus from the .asdatabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below:

    foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager))
    {
        node.ParentNode.RemoveChild(node);
    }

    Now we need to change the database name in both the ID and Name nodes using code such as:

    XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager);
    if (databaseID != null)
        databaseID.InnerText = olapDatabaseName;

    Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string.

    XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager);
    if (connectionStringNode != null)
    {
        connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString;
    }

    Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult.

    XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);

    // check for errors during deployment
    foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults)
    {
        foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages)
        {
            if ((oMessage.GetType().Name == "XmlaError"))
            {
                FireError(oMessage.Description);
                HadError = true;
            }
        }
    }

    If you are not familiar with XML programming, all this may seem a bit daunting, but persevere, as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate, as the code can become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is: Get the coordinates within the window with glfwGetCursorPos. The window was created with

    window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get coordinates is

    double xpos, ypos;
    glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

    glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
    glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
    glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear. Finally, for each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z. Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.

    Read the article

  • Textures do not render on ATI graphics cards?

    - by Mathias Lykkegaard Lorenzen
    I'm rendering textured quads to an orthographic view in XNA through hardware instancing. On Nvidia graphics cards this all works, tested on 3 machines. On ATI cards it doesn't work at all, tested on 2 machines. How come? Culling, perhaps? My orthographic view is set up like this:

    Matrix projection = Matrix.CreateOrthographicOffCenter(0, graphicsDevice.Viewport.Width, -graphicsDevice.Viewport.Height, 0, 0, 1);

    And my elements are rendered with the Z-coordinate 0. Edit: I just figured out something weird. If I do not call the following spritebatch code before doing my textured quad rendering code, then it won't work on Nvidia cards either. Could that be due to culling information or something like that?

    Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone);
    ...
    spriteBatch.End();

    Edit 2: Here's the full code for my instancing call.

    public void DrawTextures()
    {
        Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone, textureEffect);
        while (texturesToDraw.Count > 0)
        {
            TextureJob texture = texturesToDraw.Dequeue();
            spriteBatch.Draw(texture.Texture, texture.DestinationRectangle, texture.TintingColor);
        }
        spriteBatch.End();

    #if !NOTEXTUREINSTANCING
        // no work to do
        if (positionInBufferTextured > 0)
        {
            device.BlendState = BlendState.Opaque;
            textureEffect.CurrentTechnique = textureEffect.Techniques["Technique1"];
            textureEffect.Parameters["Texture"].SetValue(darkTexture);
            textureEffect.CurrentTechnique.Passes[0].Apply();

            if ((textureInstanceBuffer == null) || (positionInBufferTextured > textureInstanceBuffer.VertexCount))
            {
                if (textureInstanceBuffer != null)
                    textureInstanceBuffer.Dispose();
                textureInstanceBuffer = new DynamicVertexBuffer(device, texturedInstanceVertexDeclaration,
                    positionInBufferTextured, BufferUsage.WriteOnly);
            }

            if (positionInBufferTextured > 0)
            {
                textureInstanceBuffer.SetData(texturedInstances, 0, positionInBufferTextured, SetDataOptions.Discard);
            }

            device.Indices = textureIndexBuffer;
            device.SetVertexBuffers(textureGeometryBuffer, new VertexBufferBinding(textureInstanceBuffer, 0, 1));
            device.DrawInstancedPrimitives(PrimitiveType.TriangleStrip, 0, 0,
                textureGeometryBuffer.VertexCount, 0, 2, positionInBufferTextured);

            // now that we've drawn, it's ok to reset positionInBuffer back to zero,
            // and write over any vertices that may have been set previously.
            positionInBufferTextured = 0;
        }
    #endif
    }

    Read the article

  • Deselect first row on gridview onload

    - by Suresh Behera
    I had a situation where I needed to deselect the first GridView row on load, and came to know that IsSynchronizedWithCurrentItem on the GridView should be able to do that, but somehow I missed it on my GridView. Meanwhile, the code below should work:

    void gvMain_RowLoaded(object sender, RowLoadedEventArgs e)
    {
        try
        {
            GridViewRow row = e.Row as GridViewRow;
            if (row != null && !firstItemExpanded)
            {
                row.DetailsVisibility = Visibility.Collapsed;
                firstItemExpanded = false;
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    ...(read more)

    Read the article

  • Test interface implementation

    - by Michael
    I have an interface in our code base that I would like to be able to mock out for unit testing. I am writing a test implementation so that individual tests can override only the specific methods they are concerned with, rather than implementing every method. I've run into a quandary over how the test implementation should behave if a test fails to override a method used by the method under test. Should I return a "non-value" (0, null) from the test implementation, or throw an UnsupportedOperationException to explicitly fail the test?

    Read the article

  • AWS .NET SDK v2: the message-pump pattern

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/11/aws-.net-sdk-v2--the-message-pump-pattern.aspx

    Version 2 of the AWS SDK for .NET has had a few pre-release iterations on NuGet and is stable, if a bit lacking in step-by-step guides. There's at least one big reason to try it out: the SQS queue client now supports asynchronous reads, so you don't need a clumsy polling mechanism to retrieve messages. The new approach is easy to use, and lets you work with AWS queues in a similar way to the message-pump pattern used in the latest Azure SDK for Service Bus queues and topics. I've posted a simple wrapper class for subscribing to an SQS hub on gist here: A wrapper for the SQS client in the AWS SDK for .NET v2, which uses the message-pump pattern. Here's the core functionality in the subscribe method:

    private async void Subscribe()
    {
        if (_isListening)
        {
            var request = new ReceiveMessageRequest { MaxNumberOfMessages = 10 };
            request.QueueUrl = QueueUrl;
            var result = await _sqsClient.ReceiveMessageAsync(request, _cancellationTokenSource.Token);
            if (result.Messages.Count > 0)
            {
                foreach (var message in result.Messages)
                {
                    if (_receiveAction != null && message != null)
                    {
                        _receiveAction(message.Body);
                        DeleteMessage(message.ReceiptHandle);
                    }
                }
            }
        }
        if (_isListening)
        {
            Subscribe();
        }
    }

    which you call with something like this:

    client.Subscribe(x => Log.Debug(x.Body));

    The async SDK call returns when there is something in the queue, and will run your receive action for every message it gets in the batch (which defaults to the maximum size of 10 messages per call). The listener will sit there awaiting messages until you stop it with:

    client.Unsubscribe();

    Internally it has a cancellation token which it sets when you call Unsubscribe, which cancels any in-flight call to SQS and stops the pump. The wrapper will also create the queue if it doesn't exist at runtime. The Ensure() method gets called in the constructor, so when you first use the client for a queue (sending or subscribing), it will set itself up:

    if (!Exists())
    {
        var request = new CreateQueueRequest();
        request.QueueName = QueueName;
        var response = _sqsClient.CreateQueue(request);
        QueueUrl = response.QueueUrl;
    }

    The Exists() check has to make a call to ListQueues on the SQS client, as it doesn't provide its own method to check whether a queue exists. That call also populates the Amazon Resource Name, the unique identifier for this queue, which will be useful later. To use the wrapper, just instantiate and go:

    var queueClient = new QueueClient("ProcessWorkflow");
    queueClient.Subscribe(x => Log.Debug(x.Body));
    var message = {}; // etc.
    queueClient.Send(message);

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles), after first screening out any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency).

    public static bool LOSCheck(Vector2 pos1, Vector2 pos2)
    {
        Vector2 currentPos = pos1;
        Vector2 perMove = (pos2 - pos1);
        perMove.Normalize();

        HashSet<ClutterObject> clutter = new HashSet<ClutterObject>();
        foreach (Room r in map.GetRooms())
        {
            if (r != null)
            {
                foreach (ClutterObject c in r.GetClutter())
                {
                    if (c != null && !(c.GetRectangle().X * perMove.X < 0)
                        && !(c.GetRectangle().Y * perMove.Y < 0))
                    {
                        Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y);
                        if ((cVector - pos1).Length() < 1500)
                            clutter.Add(c);
                    }
                }
            }
        }

        while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500))
        {
            Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0);
            foreach (ClutterObject c in clutter)
            {
                if (position.Intersects(c.GetRectangle()))
                    return false;
            }
            currentPos += perMove;
        }
        return true;
    }

    I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient way to determine which objects may be in front of the viewer with greater precision than the rather broad 90-degree window I've given myself?

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars; check the spelling again, there will be a quiz later).

    This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.

    While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating that can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch, all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

    declare @time binary(128)=cast(getdate() as binary(8))
    set context_info @time

    ... rest of Big Adventure code ...

    go
    use master;
    insert mirror_test(server,role,partner,db,state,safety,start,duration)
    select @@servername, mirroring_role_desc, mirroring_partner_instance,
        db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
        cast(cast(context_info() as binary(8)) as datetime),
        datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
    from sys.database_mirroring
    where db_name(database_id) like 'Adv%';

    I declared @time as binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128), as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)

    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or if you are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original; mine is all lowercase.)

    declare @msg binary(128)
    set @msg=cast('Altering bigProduct.ProductID' as binary(128))
    set context_info @msg
    go
    ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
    GO
    set context_info 0x0
    go

    declare @msg1 binary(128)
    set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
    set context_info @msg1
    go
    ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
    GO
    set context_info 0x0
    go

    declare @msg2 binary(128)
    set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
    set context_info @msg2
    go
    ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
    GO
    set context_info 0x0
    go

    declare @msg3 binary(128)
    set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
    set context_info @msg3
    go
    ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
    GO
    set context_info 0x0
    go

    declare @msg4 binary(128)
    set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
    set context_info @msg4
    go
    CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
    GO
    set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable; you can't use an expression. And since GO starts a new batch, I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

    select cast(context_info as varchar(128))
    from sys.dm_exec_sessions
    where session_id=51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs), there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
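    As a starting point for that exercise, here is a minimal sketch of embedding the system time in one of the messages and parsing it back out (the timestamp format, message text and SPID are illustrative, not from the article):

    declare @msg binary(128)
    set @msg = cast('Altering bigProduct.ProductID at ' + convert(varchar(23), getdate(), 121) as binary(128))
    set context_info @msg

    -- later, from another session, pull the timestamp back out of the tag:
    select substring(cast(context_info as varchar(128)),
                     charindex(' at ', cast(context_info as varchar(128))) + 4, 23)
    from sys.dm_exec_sessions
    where session_id = 51;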

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folk who oppose the use of singletons. I think it's an appropriate use in this case, as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a keyword similar to readonly in initonly, so this isn't strictly a C#-type question. (Literal field versus constant variable in C++/CLI) Sidenote: a discussion of some of the nuances of using const or readonly.

    My question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk, so I'm currently using a "readonly" modifier on those objects*. I'm using readonly instead of const with the lists, as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if/when the list(s) are modified. This seems like a tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below.) The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach, as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below.) It's highly unlikely that we'll change the contents or format of the lists; they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has been converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers/classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case.

    A representative sample of my singleton:

    public sealed class mySingl
    {
        private static volatile mySingl sngl;
        private static object lockObject = new Object();

        public readonly Dictionary<string, string> myDict = new Dictionary<string, string>()
        {
            {"I", "index"},
            {"D", "display"},
        };

        public enum parms { ABC = 10, DEF = 20, FGH = 30 };

        public readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH };

        public static mySingl Instance
        {
            get
            {
                if (sngl == null)
                {
                    lock (lockObject)
                    {
                        if (sngl == null)
                            sngl = new mySingl();
                    }
                }
                return sngl;
            }
        }

        private mySingl()
        {
            doSomething();
        }
    }

    Questions:
    1. Am I taking the most reasonable approach in this case?
    2. Should I be worrying about const vs. readonly?
    3. Is there a better way of providing this information?

    Read the article

  • Setting up cube map texture parameters in OpenGL

    - by KaiserJohaan
    I see a lot of tutorials and sources use the following code snippet when defining each face of a cube map:

    for (i = 0; i < 6; i++)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, InternalFormat, size, size, 0, Format, Type, NULL);

    Is it safe to assume GL_TEXTURE_CUBE_MAP_POSITIVE_X + i will properly iterate the remaining cube map targets: GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, etc.?

    Read the article

  • Given an XML which contains a representation of a graph, how to apply the DFS algorithm to it? [on hold]

    - by winston smith
    Given the following XML, which is a directed graph:

    <?xml version="1.0" encoding="iso-8859-1" ?>
    <!DOCTYPE graph PUBLIC "-//FC//DTD red//EN" "../dtd/graph.dtd">
    <graph direct="1">
        <vertex label="V0"/>
        <vertex label="V1"/>
        <vertex label="V2"/>
        <vertex label="V3"/>
        <vertex label="V4"/>
        <vertex label="V5"/>
        <edge source="V0" target="V1" weight="1"/>
        <edge source="V0" target="V4" weight="1"/>
        <edge source="V5" target="V2" weight="1"/>
        <edge source="V5" target="V4" weight="1"/>
        <edge source="V1" target="V2" weight="1"/>
        <edge source="V1" target="V3" weight="1"/>
        <edge source="V1" target="V4" weight="1"/>
        <edge source="V2" target="V3" weight="1"/>
    </graph>

    With these classes I parsed the graph and gave it an adjacency-list representation:

    import java.io.IOException;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import practica3.util.Disc;

    public class ParsingXML {

        public static void main(String[] args) {
            try {
                Collection<Vertex> sources = new HashSet<Vertex>();
                LinkedList<String> lines = Disc.readFile("xml/directed.xml");
                for (String lin : lines) {
                    int i = Disc.find(lin, "source=\"");
                    String data = "";
                    if (i > 0 && i < lin.length()) {
                        while (lin.charAt(i + 1) != '"') {
                            data += lin.charAt(i + 1);
                            i++;
                        }
                        Vertex v = new Vertex();
                        v.setName(data);
                        v.setAdy(new HashSet<Vertex>());
                        sources.add(v);
                    }
                }
                Iterator it = sources.iterator();
                while (it.hasNext()) {
                    Vertex ver = (Vertex) it.next();
                    Collection<Vertex> adyacencias = ver.getAdy();
                    LinkedList<String> ls = Disc.readFile("xml/graphs.xml");
                    for (String lin : ls) {
                        int i = Disc.find(lin, "target=\"");
                        String data = "";
                        if (lin.contains("source=\"" + ver.getName())) {
                            Vertex v = new Vertex();
                            if (i > 0 && i < lin.length()) {
                                while (lin.charAt(i + 1) != '"') {
                                    data += lin.charAt(i + 1);
                                    i++;
                                }
                                v.setName(data);
                            }
                            i = Disc.find(lin, "weight=\"");
                            data = "";
                            if (i > 0 && i < lin.length()) {
                                while (lin.charAt(i + 1) != '"') {
                                    data += lin.charAt(i + 1);
                                    i++;
                                }
                                v.setWeight(Integer.parseInt(data));
                            }
                            if (v.getName() != null) {
                                adyacencias.add(v);
                            }
                        }
                    }
                }
                for (Vertex vert : sources) {
                    System.out.println(vert);
                    System.out.println("adyacencias: " + vert.getAdy());
                }
            } catch (IOException ex) {
                Logger.getLogger(ParsingXML.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }

    This is another class:

    import java.util.Collection;
    import java.util.Objects;

    public class Vertex {

        private String name;
        private int weight;
        private Collection ady;

        public Collection getAdy() { return ady; }
        public void setAdy(Collection adyacencias) { this.ady = adyacencias; }
        public String getName() { return name; }
        public void setName(String nombre) { this.name = nombre; }
        public int getWeight() { return weight; }
        public void setWeight(int weight) { this.weight = weight; }

        @Override
        public int hashCode() {
            int hash = 7;
            hash = 43 * hash + Objects.hashCode(this.name);
            hash = 43 * hash + this.weight;
            return hash;
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (getClass() != obj.getClass()) {
                return false;
            }
            final Vertex other = (Vertex) obj;
            if (!Objects.equals(this.name, other.name)) {
                return false;
            }
            if (this.weight != other.weight) {
                return false;
            }
            return true;
        }

        @Override
        public String toString() {
            return "Vertice{" + "name=" + name + ", weight=" + weight + '}';
        }
    }

    And finally:

    /* -*-jde-*- */
    /* <Disc.java> Contains the main argument */
    import java.io.*;
    import java.util.LinkedList;

    /**
     * Reads and writes files as lists of strings.
     * Ideal for use with the graph classes.
     *
     * @author Peralta Santa Anna Victor Miguel
     * @since July 2011
     */
    public class Disc {

        /**
         * Reads a file.
         *
         * @param fileName the file to read
         * @return the file contents as a list of strings
         */
        public static LinkedList<String> readFile(String fileName) throws IOException {
            BufferedReader file = new BufferedReader(new FileReader(fileName));
            LinkedList<String> textlist = new LinkedList<String>();
            while (file.ready()) {
                textlist.add(file.readLine().trim());
            }
            file.close();
            /*
            for (String linea : textlist) {
                if (linea.contains("source")) {
                    //String generado = linea.replaceAll("<\\w+\\s+\"", "");
                    //System.out.println(generado);
                }
            }
            */
            return textlist;
        } // readFile

        public static int find(String linea, String palabra) {
            int i, j;
            boolean found = false;
            for (i = 0, j = 0; i < linea.length(); i++) {
                if (linea.charAt(i) == palabra.charAt(j)) {
                    j++;
                    if (j == palabra.length()) {
                        found = true;
                        return i;
                    }
                } else {
                    continue;
                }
            }
            if (!found) {
                i = -1;
            }
            return i;
        }

        /**
         * Writes a list of strings to a file.
         *
         * @param fileName the file to write
         * @param tofile the list of strings to put in the file
         * @param append whether to append to the file or start from scratch
         */
        public static void writeFile(String fileName, LinkedList<String> tofile, boolean append) throws IOException {
            FileWriter file = new FileWriter(fileName, append);
            for (int i = 0; i < tofile.size(); i++) {
                file.write(tofile.get(i) + "\n");
            }
            file.close();
        } // writeFile

        /**
         * Writes a single string to a file.
         *
         * @param msg the file to write
         * @param tofile the string to put in the file
         * @param append whether to append to the file or start from scratch
         */
        public static void writeFile(String msg, String tofile, boolean append) throws IOException {
            FileWriter file = new FileWriter(msg, append);
            file.write(tofile);
            file.close();
        } // writeFile
    }

    I'm stuck on the best way, given an adjacency-list representation of the graph, to apply the depth-first search algorithm to it. Any idea of how to approach completing the task?

    Read the article

  • How do I draw a scene with 2 nested frames

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just two frames; one of them is called x3ds_Torso and it has a child frame called x3ds_Arm_01. Each one of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top. Sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by its parent's, starting from the top and working down, and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this:

    public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) {
        System.out.println("-->" + bone.name);
        if (boneParent != null) {
            bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined);
        } else {
            bone.matrixCombined = bone.matrixTransform;
        }
        bone.matrixFinal = bone.matrixCombined;
        for (Bone childBone : bone.boneChilds) {
            calculeFinalMatrixPosition(bone, childBone);
        }
    }

    Then I have to multiply every vertex of the mesh:

    public void transformVertex(Bone bone) {
        for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) {
            Mesh mesh = iterator.next();
            if (mesh.boneName.equals(bone.name)) {
                float[] vertex = new float[4];
                double[] newVertex = new double[3];
                if (mesh.skinnedVertexBuffer == null) {
                    mesh.skinnedVertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                }
                mesh.vertexBuffer.buffer.rewind();
                while (mesh.vertexBuffer.buffer.hasRemaining()) {
                    vertex[0] = mesh.vertexBuffer.buffer.get();
                    vertex[1] = mesh.vertexBuffer.buffer.get();
                    vertex[2] = mesh.vertexBuffer.buffer.get();
                    vertex[3] = 1;
                    newVertex = bone.matrixFinal.transpose().multiply(vertex);
                    mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0]));
                    mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1]));
                    mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2]));
                }
                mesh.vertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                mesh.skinnedVertexBuffer.buffer.rewind();
                mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer);
            }
        }
        for (Bone childBone : bone.boneChilds) {
            transformVertex(childBone);
        }
    }

    I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.

    Read the article

  • Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

    - by hamsun
One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions, also in just one command. This would certainly have made my life slightly easier had it been available when I administered a data warehouse on Oracle 9i. To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS is to be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS is going to be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. This form of partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table as well. First create the ORDERS table -

SQL> CREATE TABLE orders
       ( order_id NUMBER(12),
         order_date TIMESTAMP,
         order_mode VARCHAR2(8),
         customer_id NUMBER(6),
         order_status NUMBER(2),
         order_total NUMBER(8,2),
         sales_rep_id NUMBER(6),
         promotion_id NUMBER(6),
         CONSTRAINT orders_pk PRIMARY KEY(order_id)
       )
       PARTITION BY RANGE(order_date)
       (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
        PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
        PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
        PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')) );

Table created.

Now the ORDER_ITEMS table -

SQL> CREATE TABLE order_items
       ( order_id NUMBER(12) NOT NULL,
         line_item_id NUMBER(3) NOT NULL,
         product_id NUMBER(6) NOT NULL,
         unit_price NUMBER(8,2),
         quantity NUMBER(8),
         CONSTRAINT order_items_fk
         FOREIGN KEY(order_id) REFERENCES orders(order_id) on delete cascade)
       PARTITION BY REFERENCE(order_items_fk) tablespace example;

Table created.

Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables –

SQL> select table_name, partition_name, partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and table_name like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME     PARTITION_NAME  POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4

Just as an aside, it is also now possible in 12c to use interval partitioning on reference partitioned tables; in 11g it was not possible to combine these two partitioning features. For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the add command is identical in format to the partition specification part of the create command as shown above -

SQL> alter table orders add
     PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
     PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
     PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
     PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

Table altered.
Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables –

SQL> select table_name, partition_name, partition_position position, high_value
     from dba_tab_partitions
     where table_owner='SH' and table_name like 'ORDER_%'
     order by partition_position, table_name;

TABLE_NAME     PARTITION_NAME  POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q1_2008                5 TIMESTAMP' 2008-04-01 00:00:00'
ORDER_ITEMS    Q1_2008                5
ORDERS         Q2_2008                6 TIMESTAMP' 2008-07-01 00:00:00'
ORDER_ITEMS    Q2_2008                6
ORDERS         Q3_2008                7 TIMESTAMP' 2008-10-01 00:00:00'
ORDER_ITEMS    Q3_2008                7
ORDERS         Q4_2008                8 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                8

Next, we can drop or truncate multiple partitions by giving a comma-separated list in the alter table command. Note the use of the plural ‘partitions’ in the command, as opposed to the singular ‘partition’ prior to 12c –

SQL> alter table orders drop partitions Q3_2008, Q2_2008, Q1_2008;

Table altered.

Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables –

TABLE_NAME     PARTITION_NAME  POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Now let us merge all the 2007 partitions together to form one single partition –

SQL> alter table orders merge partitions Q1_2007, Q2_2007, Q3_2007, Q4_2007
     into partition Y_2007;

Table altered.

TABLE_NAME     PARTITION_NAME  POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------------
ORDERS         Y_2007                 1 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Y_2007                 1
ORDERS         Q4_2008                2 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                2

Splitting partitions is slightly more involved. In the case of range partitioning, one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these the ‘everything else’ partitions; they will contain any rows contained in the original partition that are not contained in any of the other new partitions.
For example, let us split the Y_2007 partition back into 4 quarterly partitions –

SQL> alter table orders split partition Y_2007 into
     (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
      PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
      PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
      PARTITION Q4_2007);

Now look at DBA_TAB_PARTITIONS to get details of the new partitions –

TABLE_NAME     PARTITION_NAME  POSITION HIGH_VALUE
-------------- --------------- -------- -------------------------------
ORDERS         Q1_2007                1 TIMESTAMP' 2007-04-01 00:00:00'
ORDER_ITEMS    Q1_2007                1
ORDERS         Q2_2007                2 TIMESTAMP' 2007-07-01 00:00:00'
ORDER_ITEMS    Q2_2007                2
ORDERS         Q3_2007                3 TIMESTAMP' 2007-10-01 00:00:00'
ORDER_ITEMS    Q3_2007                3
ORDERS         Q4_2007                4 TIMESTAMP' 2008-01-01 00:00:00'
ORDER_ITEMS    Q4_2007                4
ORDERS         Q4_2008                5 TIMESTAMP' 2009-01-01 00:00:00'
ORDER_ITEMS    Q4_2008                5

Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split. As for a list partitioning example, let us look at another table, SALES_PAR_LIST, which has 2 partitions, AMERICAS and EUROPE, and a partitioning key of COUNTRY_NAME.

SQL> select table_name, partition_name, high_value
     from dba_tab_partitions
     where table_owner='SH' and table_name = 'SALES_PAR_LIST';

TABLE_NAME     PARTITION_NAME  HIGH_VALUE
-------------- --------------- ----------------------------------------
SALES_PAR_LIST AMERICAS        'Argentina', 'Canada', 'Peru', 'USA',
                               'Honduras', 'Brazil', 'Nicaragua'
SALES_PAR_LIST EUROPE          'France', 'Spain', 'Ireland', 'Germany',
                               'Belgium', 'Portugal', 'Denmark'

Now split the AMERICAS partition into 3 partitions –

SQL> alter table sales_par_list split partition americas into
     (partition south_america values ('Argentina','Peru','Brazil'),
      partition north_america values ('Canada','USA'),
      partition central_america);

Table altered.

Note that no list of values was given for the ‘Central America’ partition. However, it should have inherited any values in the original ‘Americas’ partition that were not assigned to either the ‘North America’ or ‘South America’ partitions. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

SQL> select table_name, partition_name, high_value
     from dba_tab_partitions
     where table_owner='SH' and table_name = 'SALES_PAR_LIST';

TABLE_NAME      PARTITION_NAME   HIGH_VALUE
--------------- ---------------- ----------------------------------------
SALES_PAR_LIST  SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
SALES_PAR_LIST  NORTH_AMERICA    'Canada', 'USA'
SALES_PAR_LIST  CENTRAL_AMERICA  'Honduras', 'Nicaragua'
SALES_PAR_LIST  EUROPE           'France', 'Spain', 'Ireland', 'Germany',
                                 'Belgium', 'Portugal', 'Denmark'

In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a bit more straightforward to carry out once they have upgraded to Oracle Database 12c. Gwen Lazenby is a Principal Training Consultant at Oracle. She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

    Read the article

  • Android, how important is deltaTime?

    - by iQue
I'm making a game that is getting pretty big, and sometimes my thread has to skip a frame. So far I'm not using deltaTime to set the speed of the different objects in the game, because it's still not a big enough game for it to matter, imo. But it's getting bigger than I planned, so my question is: how important is deltaTime? If I should use deltaTime there is a problem: since speedX and speedY are integers (they have to be for Eclipse to let you make a Rect of them), I can't apply deltaTime very cleanly, as far as I understand, but I might be wrong. I've tried adding deltaTime to the code below, and sometimes my enemies just don't move after spawning; they just stand there and run in place. I'll add some code for how I set/use speed:

public void update(int dx, int dy) {
    double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x);
    x += dx * Math.cos(Math.toRadians(theta));
    y += dy * Math.sin(Math.toRadians(theta));
    currentFrame = ++currentFrame % BMP_COLUMNS;
}

public void draw(Canvas canvas) {
    int srcX = currentFrame * width;
    int srcY = 1 * height;
    Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
    Rect dst = new Rect(x, y, x + width, y + height);
    canvas.drawBitmap(bitmap, src, dst, null);
}

So if someone with some experience with this has any thoughts, please share. Thank you! Changed code:

public void update(int dx, int dy, float delta) {
    double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x);
    double speedX = delta * dx * Math.cos(Math.toRadians(theta));
    double speedY = delta * dy * Math.sin(Math.toRadians(theta));
    x += speedX;
    y += speedY;
    currentFrame = ++currentFrame % BMP_COLUMNS;
}

public void draw(Canvas canvas) {
    int srcX = currentFrame * width;
    int srcY = 1 * height;
    Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
    Rect dst = new Rect(x, y, x + width, y + height);
    canvas.drawBitmap(bitmap, src, dst, null);
}

With this code my enemies move like before, except they won't move to the right (won't increment x); all other directions work.
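For what it's worth, the usual way around the integer problem is to keep the position in floats and convert to int only when building the destination Rect, so sub-pixel movement from delta scaling accumulates instead of being truncated away each frame. A rough sketch under that assumption; the class and names are illustrative, not from the question's code:

import android.graphics.Rect;

// Sketch: float position updated with deltaTime, rounded only at draw time.
public class Enemy {
    private float x, y;              // float position accumulates sub-pixel moves
    private final int width, height;

    public Enemy(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // dx/dy stay ints (pixels per second); deltaSeconds is the frame time in seconds.
    public void update(int dx, int dy, float deltaSeconds, float targetX, float targetY) {
        double theta = Math.atan2(-(y - targetY), targetX - x); // radians directly
        x += dx * deltaSeconds * (float) Math.cos(theta);
        y += dy * deltaSeconds * (float) Math.sin(theta);
    }

    public Rect destinationRect() {
        int drawX = Math.round(x);   // convert to int only here
        int drawY = Math.round(y);
        return new Rect(drawX, drawY, drawX + width, drawY + height);
    }
}

With integer positions, any per-frame step below one pixel rounds to zero, which is the classic cause of sprites that keep animating but never actually move.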

    Read the article

  • Ball bouncing at a certain angle and efficiency computations

    - by X Y
I would like to make a pong game with a small twist (for now). Every time the ball bounces off one of the paddles I want it to be under a certain angle (between a min and a max). I simply can't wrap my head around how to actually do it (I have some thoughts and such, but I simply cannot implement them properly - I feel I'm overcomplicating things). Here's an image with a small explanation. One other problem would be that the conditions for bouncing have to be different for every edge. For example, in the picture, on the two small horizontal edges I do not want a perfectly vertical bounce when in the middle of the edge, but rather a constant angle (pi/4 maybe) in either direction depending on the collision point (before the middle of the edge, or after). All of my collisions are done with the Separating Axis Theorem (and seem to work fine). I'm looking for something efficient, because I want to add a lot of things later on (maybe polygons with many edges and such), so I need to keep to a minimum the amount of checking done every frame. The collision algorithm begins testing whenever the bounding boxes of the paddle and the ball intersect. Is there something better to test for possible collisions every frame (more efficient in the long run, with many more objects etc., not necessarily easy to code)? I'm going to post the code for my game: Paddle Class public class Paddle : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private bool keybEnabled; private bool isLeftPaddle; private Texture2D paddleSprite; private Vector2 paddlePosition; private float paddleSpeedY; private Vector2 paddleScale = new Vector2(1f, 1f); private const float DEFAULT_Y_SPEED = 150; private Vector2[] Normals2Edges; private Vector2[] Vertices = new Vector2[4]; private List<Vector2> lst = new List<Vector2>(); private Vector2 Edge; #endregion #region Properties public float Speed { get {return paddleSpeedY; } set { paddleSpeedY = value; } } public Vector2[] Normal2EdgesVector { get { NormalsToEdges(this.isLeftPaddle); return Normals2Edges; } } public Vector2[] VertexVector { get { return Vertices; } } public Vector2 Scale { get { return paddleScale; } set { paddleScale = value; NormalsToEdges(this.isLeftPaddle); } } public float X { get { return paddlePosition.X; } set { paddlePosition.X = value; } } public float Y { get { return paddlePosition.Y; } set { paddlePosition.Y = value; } } public float Width { get { return (Scale.X == 1f ? (float)paddleSprite.Width : paddleSprite.Width * Scale.X); } } public float Height { get { return ( Scale.Y==1f ? (float)paddleSprite.Height : paddleSprite.Height*Scale.Y ); } } public Texture2D GetSprite { get { return paddleSprite; } } public Rectangle Boundary { get { return new Rectangle((int)paddlePosition.X, (int)paddlePosition.Y, (int)this.Width, (int)this.Height); } } public bool KeyboardEnabled { get { return keybEnabled; } } #endregion private void NormalsToEdges(bool isLeftPaddle) { Normals2Edges = null; Edge = Vector2.Zero; lst.Clear(); for (int i = 0; i < Vertices.Length; i++) { Edge = Vertices[i + 1 == Vertices.Length ? 0 : i + 1] - Vertices[i]; if (Edge != Vector2.Zero) { Edge.Normalize(); //outer normal to edge !!
(origin in top-left) lst.Add(new Vector2(Edge.Y, -Edge.X)); } } Normals2Edges = lst.ToArray(); } public float[] ProjectPaddle(Vector2 axis) { if (Vertices.Length == 0 || axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, Vertices[0]); max = min; for (int i = 1; i < Vertices.Length; i++) { float p = Vector2.Dot(axis, Vertices[i]); if (p < min) min = p; else if (p > max) max = p; } return (new float[2] { min, max }); } public Paddle(Game game, bool isLeftPaddle, bool enableKeyboard = true) : base(game) { contentManager = new ContentManager(game.Services); keybEnabled = enableKeyboard; this.isLeftPaddle = isLeftPaddle; } public void setPosition(Vector2 newPos) { X = newPos.X; Y = newPos.Y; } public override void Initialize() { base.Initialize(); this.Speed = DEFAULT_Y_SPEED; X = 0; Y = 0; NormalsToEdges(this.isLeftPaddle); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleSprite = contentManager.Load<Texture2D>(@"Content\pongBar"); } public override void Update(GameTime gameTime) { //vertices array Vertices[0] = this.paddlePosition; Vertices[1] = this.paddlePosition + new Vector2(this.Width, 0); Vertices[2] = this.paddlePosition + new Vector2(this.Width, this.Height); Vertices[3] = this.paddlePosition + new Vector2(0, this.Height); // Move paddle, but don't allow movement off the screen if (KeyboardEnabled) { float moveDistance = Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState newKeyState = Keyboard.GetState(); if (newKeyState.IsKeyDown(Keys.Down) && Y + paddleSprite.Height + moveDistance <= Game.GraphicsDevice.Viewport.Height) { Y += moveDistance; } else if (newKeyState.IsKeyDown(Keys.Up) && Y - moveDistance >= 0) { Y -= moveDistance; } } else { if (this.Y + this.Height > this.GraphicsDevice.Viewport.Height) { this.Y = this.Game.GraphicsDevice.Viewport.Height - this.Height - 1; } } base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(SpriteSortMode.Texture,null); spriteBatch.Draw(paddleSprite, paddlePosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Ball Class public class Ball : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private const float DEFAULT_SPEED = 50; private float speedIncrement = 0; private Vector2 ballScale = new Vector2(1f, 1f); private const float INCREASE_SPEED = 50; private Texture2D ballSprite; //initial texture private Vector2 ballPosition; //position private Vector2 centerOfBall; //center coords private Vector2 ballSpeed = new Vector2(DEFAULT_SPEED, DEFAULT_SPEED); //speed #endregion #region Properties public float DEFAULTSPEED { get { return DEFAULT_SPEED; } } public Vector2 ballCenter { get { return centerOfBall; } } public Vector2 Scale { get { return ballScale; } set { ballScale = value; } } public float SpeedX { get { return ballSpeed.X; } set { ballSpeed.X = value; } } public float SpeedY { get { return ballSpeed.Y; } set { ballSpeed.Y = value; } } public float X { get { return ballPosition.X; } set { ballPosition.X = value; } } public float Y { get { return ballPosition.Y; } set { ballPosition.Y = value; } } public Texture2D GetSprite { get { return ballSprite; } } public float Width { get { return (Scale.X == 1f ? (float)ballSprite.Width : ballSprite.Width * Scale.X); } } public float Height { get { return (Scale.Y == 1f ? 
(float)ballSprite.Height : ballSprite.Height * Scale.Y); } } public float SpeedIncreaseIncrement { get { return speedIncrement; } set { speedIncrement = value; } } public Rectangle Boundary { get { return new Rectangle((int)ballPosition.X, (int)ballPosition.Y, (int)this.Width, (int)this.Height); } } #endregion public Ball(Game game) : base(game) { contentManager = new ContentManager(game.Services); } public void Reset() { ballSpeed.X = DEFAULT_SPEED; ballSpeed.Y = DEFAULT_SPEED; ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } public void SpeedUp() { if (ballSpeed.Y < 0) ballSpeed.Y -= (INCREASE_SPEED + speedIncrement); else ballSpeed.Y += (INCREASE_SPEED + speedIncrement); if (ballSpeed.X < 0) ballSpeed.X -= (INCREASE_SPEED + speedIncrement); else ballSpeed.X += (INCREASE_SPEED + speedIncrement); } public float[] ProjectBall(Vector2 axis) { if (axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, this.ballCenter) - this.Width/2; //center - radius max = min + this.Width; //center + radius return (new float[2] { min, max }); } public void ChangeHorzDirection() { ballSpeed.X *= -1; } public void ChangeVertDirection() { ballSpeed.Y *= -1; } public override void Initialize() { base.Initialize(); ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); ballSprite = contentManager.Load<Texture2D>(@"Content\ball"); } public override void Update(GameTime gameTime) { if (this.Y < 1 || this.Y > GraphicsDevice.Viewport.Height - this.Height - 1) this.ChangeVertDirection(); centerOfBall = new Vector2(ballPosition.X + this.Width / 2, ballPosition.Y + this.Height / 2); base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(); spriteBatch.Draw(ballSprite, ballPosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Main game class public class gameStart : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; public gameStart() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; this.Window.Title = "Pong game"; } protected override void Initialize() { ball = new Ball(this); paddleLeft = new Paddle(this,true,false); paddleRight = new Paddle(this,false,true); Components.Add(ball); Components.Add(paddleLeft); Components.Add(paddleRight); this.Window.AllowUserResizing = false; this.IsMouseVisible = true; this.IsFixedTimeStep = false; this.isColliding = false; base.Initialize(); } #region MyPrivateStuff private Ball ball; private Paddle paddleLeft, paddleRight; private int[] bit = { -1, 1 }; private Random rnd = new Random(); private int updates = 0; enum nrPaddle { None, Left, Right }; private nrPaddle PongBar = nrPaddle.None; private ArrayList Axes = new ArrayList(); private Vector2 MTV; //minimum translation vector private bool isColliding; private float overlap; //smallest distance after projections private Vector2 overlapAxis; //axis of overlap #endregion protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleLeft.setPosition(new Vector2(0, this.GraphicsDevice.Viewport.Height / 2 - paddleLeft.Height / 2)); paddleRight.setPosition(new 
Vector2(this.GraphicsDevice.Viewport.Width - paddleRight.Width, this.GraphicsDevice.Viewport.Height / 2 - paddleRight.Height / 2)); paddleLeft.Scale = new Vector2(1f, 2f); //scale left paddle } private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] <= circle[0] || circle[1] <= pad[0]) { return false; } if (pad[1] - circle[0] < circle[1] - pad[0]) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax; } } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * overlap; } return true; } protected override void Update(GameTime gameTime) { updates += 1; float ftime = 5 * (float)gameTime.ElapsedGameTime.TotalSeconds; if (updates == 1) { isColliding = false; int Xrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; int Yrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; ball.SpeedX = Xrnd * ball.SpeedX; ball.SpeedY = Yrnd * ball.SpeedY; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } else { updates = 100; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } //autorun :) paddleLeft.Y = ball.Y; //collision detection PongBar = nrPaddle.None; if (ball.Boundary.Intersects(paddleLeft.Boundary)) { PongBar = nrPaddle.Left; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleLeft.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleLeft.VertexVector, ball.ballCenter)); } } else if (ball.Boundary.Intersects(paddleRight.Boundary)) { PongBar = nrPaddle.Right; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleRight.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleRight.VertexVector, ball.ballCenter)); } } if (PongBar != nrPaddle.None && !isColliding) switch (PongBar) { case nrPaddle.Left: if (ShapesIntersect(paddleLeft, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; case nrPaddle.Right: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; default: break; } if (!ShapesIntersect(paddleRight, ball) && !ShapesIntersect(paddleLeft, ball)) isColliding = false; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; //check ball movement if (ball.X > paddleRight.X + paddleRight.Width + 2) { //IncreaseScore(Left); ball.Reset(); updates = 0; return; } else if (ball.X < paddleLeft.X - 2) { //IncreaseScore(Right); ball.Reset(); updates = 0; return; } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Aquamarine); spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend); spriteBatch.End(); base.Draw(gameTime); } } And one method i've used: public static Vector2 NormAxisFromCircle2ClosestVertex(Vector2[] vertices, Vector2 circle) { Vector2 temp = Vector2.Zero; if (vertices.Length > 0) { float dist = (circle.X - vertices[0].X) * (circle.X - vertices[0].X) + (circle.Y - vertices[0].Y) * (circle.Y - vertices[0].Y); for (int i = 1; i < vertices.Length;i++) { if (dist > (circle.X - vertices[i].X) * 
(circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y)) { temp = vertices[i]; //memorize the closest vertex dist = (circle.X - vertices[i].X) * (circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y); } } temp = circle - temp; temp.Normalize(); } return temp; } Thanks in advance for any tips on the 4 issues. EDIT1: Something isn't working properly. The collision axis doesn't come out right and the interpolation also seems to have no effect. I've changed the code a bit: private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] < circle[0] || circle[1] < pad[0]) { return false; } if (Math.Abs(pad[1] - circle[0]) < Math.Abs(circle[1] - pad[0])) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax * (-1); } //to get the proper axis } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * Math.Abs(overlap); } return true; } And part of the Update method: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) { ball.X += MTV.X; ball.Y += MTV.Y; } //test if (overlapAxis.X == 0) //collision with horizontal edge { } else if (overlapAxis.Y == 0) //collision with vertical edge { float factor = Math.Abs(ball.ballCenter.Y - paddleRight.Y) / paddleRight.Height; if (factor > 1) factor = 1f; if (overlapAxis.X < 0) //left edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(-1, -3), new Vector2(-1, 3), factor)))); else //right edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(1, -3), new Vector2(1, 3), factor)))); } else //vertex collision??? { ball.Speed = -ball.Speed; } } What seems to happen is that "overlapAxis" doesn't always return the right one. So instead of (-1,0) I get (1,0) (this happened even before I multiplied by -1 there). Sometimes there isn't even a collision registered even though the ball passes through the paddle... The interpolation also seems to have no effect, as the angles barely change (or the overlapAxis is almost never (-1,0) or (1,0) but something like (0.9783473, 0.02743843)...). What am I missing here? :(
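As for getting the bounce angle itself under control, one way to sidestep Reflect on the paddle faces entirely is to derive the outgoing angle from where the ball struck the paddle and clamp it between a minimum and a maximum. A small sketch of that idea, written in Java here (the arithmetic ports directly to XNA's Vector2); every name and the angle convention are illustrative assumptions:

// Outgoing angle interpolated from the hit position and clamped to [minAngle, maxAngle].
public final class PaddleBounce {
    /**
     * @param hitFactor   0 at the paddle's top edge, 1 at its bottom edge
     * @param speed       ball speed magnitude, preserved across the bounce
     * @param minAngle    smallest allowed deflection from the horizontal, in radians
     * @param maxAngle    largest allowed deflection from the horizontal, in radians
     * @param movingRight true if the ball should leave toward +x
     * @return            {vx, vy} of the outgoing velocity
     */
    public static float[] bounce(float hitFactor, float speed,
                                 float minAngle, float maxAngle, boolean movingRight) {
        // Linearly map the hit position to an angle in [-maxAngle, +maxAngle] ...
        float angle = -maxAngle + hitFactor * (2f * maxAngle);
        // ... then force at least minAngle so the ball never leaves dead horizontal.
        if (Math.abs(angle) < minAngle) {
            angle = (angle < 0f) ? -minAngle : minAngle;
        }
        float vx = (float) (speed * Math.cos(angle));
        float vy = (float) (speed * Math.sin(angle));
        return new float[] { movingRight ? vx : -vx, vy };
    }
}

Because the outgoing direction is computed fresh from the hit position, the incoming angle and any imprecision in the collision normal stop mattering on the paddle faces, which also sidesteps the unreliable overlapAxis described above.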

    Read the article

  • Reinventing the Paged IEnumerable, Weigert Style!

    - by adweigert
I am pretty sure someone else has done this, I've seen variations like PagedList<T>, but this is my style of a paged IEnumerable collection. I just store a reference to the collection and generate the paged data when the enumerator is needed, so you could technically add to a list that I'm referencing and the properties and results would be adjusted accordingly. I don't mind reinventing the wheel when I can add some of my own personal flair ...

// Extension method for easy use
public static PagedEnumerable AsPaged(this IEnumerable collection, int currentPage = 1, int pageSize = 0)
{
    Contract.Requires(collection != null);
    Contract.Assume(currentPage >= 1);
    Contract.Assume(pageSize >= 0);
    return new PagedEnumerable(collection, currentPage, pageSize);
}

public class PagedEnumerable : IEnumerable
{
    public PagedEnumerable(IEnumerable collection, int currentPage = 1, int pageSize = 0)
    {
        Contract.Requires(collection != null);
        Contract.Assume(currentPage >= 1);
        Contract.Assume(pageSize >= 0);
        this.collection = collection;
        this.PageSize = pageSize;
        this.CurrentPage = currentPage;
    }

    IEnumerable collection;

    int currentPage;
    public int CurrentPage
    {
        get
        {
            if (this.currentPage > this.TotalPages) { return this.TotalPages; }
            return this.currentPage;
        }
        set
        {
            if (value < 1) { this.currentPage = 1; }
            else if (value > this.TotalPages) { this.currentPage = this.TotalPages; }
            else { this.currentPage = value; }
        }
    }

    int pageSize;
    public int PageSize
    {
        get
        {
            if (this.pageSize == 0) { return this.collection.Count(); }
            return this.pageSize;
        }
        set { this.pageSize = (value < 0) ? 0 : value; }
    }

    public int TotalPages
    {
        get { return (int)Math.Ceiling(this.collection.Count() / (double)this.PageSize); }
    }

    public IEnumerator GetEnumerator()
    {
        var pageSize = this.PageSize;
        var currentPage = this.CurrentPage;
        var startCount = (currentPage - 1) * pageSize;
        return this.collection.Skip(startCount).Take(pageSize).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }
}
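The heart of the class is the lazy Skip/Take inside GetEnumerator, which is what lets later additions to the referenced collection show up in the page. For comparison, here is a hypothetical minimal Java sketch of the same skip/take idea over a live backing list (an illustration of the technique, not a translation of the class above):

import java.util.Iterator;
import java.util.List;

// Minimal lazy paging over a live backing list; bounds are computed per iteration.
public class Paged<T> implements Iterable<T> {
    private final List<T> source;   // kept by reference, so later adds are reflected
    private final int page;         // 1-based page number
    private final int pageSize;

    public Paged(List<T> source, int page, int pageSize) {
        this.source = source;
        this.page = Math.max(1, page);
        this.pageSize = Math.max(1, pageSize);
    }

    public int totalPages() {
        return (int) Math.ceil(source.size() / (double) pageSize);
    }

    @Override
    public Iterator<T> iterator() {
        int from = Math.min((page - 1) * pageSize, source.size());
        int to = Math.min(from + pageSize, source.size());
        return source.subList(from, to).iterator();  // evaluated at iteration time
    }
}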

    Read the article

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it?  The answer is yes, and it's really quite easy to do. The first step is finding the installed help files.  If you have SQL Server 2012, BOL is installed under the Microsoft Help Library.  You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager.  When the new window pops up, click the Settings link and you'll get the following: You'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store.  This is where the help file content is kept if you downloaded it for offline use. Depending on which products you've downloaded help for, you may see a few hundred files.  Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files.  We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files. Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7Zip, WinRar, etc.) can open them.  When you do, you'll see a few thousand files in the archive.  We are only interested in the .htm files, but there's no harm in extracting all of them to a folder.  7zip provides a command-line utility and the following will extract to a D:\SQLHelp folder previously created:

7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm

Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side.  I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation:

USE Explore;
GO
CREATE SCHEMA help AUTHORIZATION dbo;
GO
-- Create default fulltext catalog for later FT indexes
CREATE FULLTEXT CATALOG FTC AS DEFAULT;
GO
CREATE TABLE help.files(
    file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY,
    path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE,
    doc_type varchar(6) DEFAULT('.xml'),
    content varbinary(max) not null);
CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033)
    KEY INDEX PK_help_files;

This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert.  I'll be using the command line again for this, it's the easiest method I know:

for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)"

You'll need to copy and run that as one line in a command prompt.  I'll explain what this does while you run it and watch several thousand files get imported: The "for" command allows you to loop over a collection of items.  In this case we want all the .htm files in the D:\SQLHelp folder.  For each file it finds, it will assign the full path and file name to the %a variable.  In the "do" clause, we'll specify another command to be run for each iteration of the loop.  I make a call to "sqlcmd" in order to run a SQL statement.  
I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection.  I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object.  In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier.  This process continues for each file in the folder, creating one new row in the table. And that's it! 5 SQL Statements and 2 command line statements to unzip and import SQL Server Books Online!  In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so.  I believe that will make it even easier. In the spirit of exploration, I'll leave you to work on some fulltext queries of this content.  I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting).  There are additional example queries in the download material for my presentation linked above. Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files.  Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords.  (You probably noticed this in the default definition for the doc_type column.)  You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed. I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago.  He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend.  He also has some FTS articles on Simple Talk: http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/ http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
Does your CHECKDB hurt, Argenis?

There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases.

Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss.

In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and was almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec – and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations.

Dreaded @BlobEater

What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table.

Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathological table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!)
Failed Hypotheses

Earlier this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction”, where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head.

As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]).

…Eureka?

This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smackdown with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March, showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test.

Yup, that'll do it

The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

CREATE DATABASE SparseColTest;
GO
USE SparseColTest;
GO
CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
GO
INSERT INTO testTable (testCol)
VALUES (NULL);
GO 1000000

That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column:

CREATE NONCLUSTERED INDEX [badBadIndex]
ON testTable (testCol)
WHERE testCol IS NOT NULL;

With the index in place, let’s run DBCC CHECKDB one more time:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds.
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slowly.

Happy CHECKDBing, -Argenis

PS: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • Crash Report in Ubuntu... hardware problem?

    - by Andrew
    Got this on my machine. I was just browsing the web on Chrome and my computer froze. I recently just built this machine. I have a feeling it is a hardware problem... Possibly one of my parts arrived broken in some way.... Starting anac(h)ronistic cron Stopping anac(h)ronistic cron Stopping cold plug devices Stopping log initial device creation Starting enable remaining boot-time encrypted block devices Starting configure network device security Starting configure virtual network devices Starting save udev log and update rules Stopping configure virtual network devices Stopping save udev log and update rules Checking battery state... Stopping System V runlevel compatibility Stopping enable remaining boot-time encrypted block devices Stopping Mount filesystems on boot 91.573384] BUG: unable to handle kernel NULL pointer dereference at (null) 91.573437] IP: [<ffffffff81313514>] strcmp+0x14/0x30 91.573470] PGD 1f7822067 PUD 1ed7a6067 PMD 0 91.573498] Oops: 0000 [#1] SMP 91.573519] CPU 3 91.573531] Modules linked in: dm_crypt bnep snd_hda_codec_realtek rfcomm bluetooth parport_pc ppdev arc4 fglrx(P) rt2800usb rt2800lib crc_ccitt rt2x00usb rt2x00lib mac0021 cfg80211 psmouse snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer send_seq_device snd joydev mac_hid mei(C) soundcore serio_raw snd_page_alloc lp parport ses enclosure usbhid hid i915 drm_kms_helper drm i2c_algo_bit mxm_umi tg_video wmi usb_storage 91.573826] 91.573837] Pid: 2297, comm: update-notifier Tainted: P C O 3.2.0-29-generic #46-Ubuntu To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Extreme4 91.573912] RIP: 0010:[<ffffffff81313514>] [<ffffffff81313514>] strcmp+0x14/0x30 91.573954] RSP: 0018:ffff8801f83f5bb8 EFLAGS: 00010246 91.573982] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 91.574019] RDX: 0000000000000069 RSI: 0000000000000000 RDI: ffff88021adb26f8 91.574056] RBP: ffff8801f83f5bb8 R08: ffff88022f2d6e80 R09: 0000000000000000 91.574093] R10: ffff88021e7dbf00 R11: 0000000000000003 R12: ffff88021c10eb40 91.574130] R13: 0000000000000000 R14: ffff88021adb26f8 R15: ffff8801f83f5d40 91.574168] FS: 00007f958cf53940(0000) GS:ffff88022f2c0000(0000) kn1GS:0000000000000000 91.574210] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 91.574240] CR2: 0000000000000000 CR3: 000000021f6d7000 CR4: 00000000000406e0 91.574277] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 91.574314] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000 91.574351] Process update-notifier (pid: 2297, threadinfo ffff801f83f4000, task ffff880208fe2e00) 91.574397] Stack: 91.574409] ffff8801f83f5be8 ffffffff811ed509 ffff88021adb26c0 ffff88021b8b7020 91.574453] ffff88021b461c60 fffffffffffffffe ffff8801f83f5c18 ffffffff811ed61f 91.574496] ffff88021adb26c0 ffff88021b8b7020 ffff8801f83f5dc8 0000000000000001 91.574539] Call Trace: 91.574558] [<ffffffff811ed509] sysfs_find_dirent+0x59/0x110 91.574591] [<ffffffff811ed61f] sysfs_lookup+0x5f/0x110 91.574621] [<ffffffff81182745] d_alloc_and_lookup+0x45/0x90 91.574654] [<ffffffff8118fe65] ? d_lookup+0x35/0x60 91.574683] [<ffffffff811848d2] do_lookup+0x202/0x310 91.574712] [<ffffffff8118660c] path_lookupat+0x11c/0x750 91.574744] [<ffffffff81318db7] ? __strncpy_from_user+0x27/0x60 91.574778] [<ffffffff81186c71] do_path_lookup+0x31/0xc0 91.574809] [<ffffffff81187779] user_path_at_empty+0x59/0xa0 91.574842] [<ffffffff81187822] ? 
do_filp_open+0x42/0xa0 91.574872] [<ffffffff811877d1] user_path_at+0x11/0x20 91.574902] [<ffffffff8117c80a] vfs_fstatat+0x3a/0x70 91.574933] [<ffffffff81161cff] ? kmem_cache_free+0x2f/0x110 91.574965] [<ffffffff8117c85e] vfs_lstat+-x31/0x70 91.574993] [<ffffffff8117c9fa] sys_newlstat+0x1a/0x40 91.575022] [<ffffffff81176ee1] ? do_sys_open+0x171/0x220 91.575053] [<ffffffff8117cb1a] ? sys_readlinkat+0x7a/0xb0 91.575086] [<ffffffff81661ec2] system_call_fastpath+0x16/0x1b 91.575118] Code: 83 c1 01 40 84 ff 75 ef 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 00 55 31 c0 48 89 e5 66 2e 0f 1f 84 00 00 00 00 00 0f b6 14 07 <3a> 14 06 75 0f 48 83 c0 01 84 d2 75 ef 31 c0 5d c3 0f 1f 00 19 91.577243] RIP [<ffffffff81313514>] strcmp+0x14/0x30 91.579314] RSP <ffff8801f83f5bb8> 91.581385] CR2: 0000000000000000

    Read the article

  • List columns where collation doesn't match database collation

    - by TiborKaraszi
The script below lists all database/table/column combinations where the column collation doesn't match the database collation. I just wrote it for a migration project and thought I'd share it. I'm sure lots of things can be improved, but the below worked just fine for me for a one-time execution on a number of servers. IF OBJECT_ID ( 'tempdb..#res' ) IS NOT NULL DROP TABLE #res GO DECLARE @db sysname , @sql nvarchar ( 2000 ) CREATE TABLE #res ( server_name sysname , db_name sysname , db_collation sysname , table_name...(read more)

    Read the article

< Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >