Search Results

Search found 2503 results on 101 pages for 'destination'.


  • RAID1: Which disk will be mirrored?

    - by tmelen
    How does a RAID1 system determine which disk to use as the source and which disk to use as the destination when mirroring? Assume, for instance, the following scenario: a RAID1 array is created with two disks A and B. A is replaced by disk C, which is added to the array. Files are being modified as time goes by. Now B is removed and A is reinserted. Will the RAID1 system realize that A and C are out of sync, and that C is more up to date than A? And if not, is there a safe way to prevent the mirroring process from starting immediately when disk A is inserted?
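
    In case this is Linux software RAID (an assumption; the device names below are only placeholders), one way to see which member md would treat as newer is to compare the event counters and update times in the superblocks before re-adding anything:

        # Hypothetical member partitions; adjust to your own layout.
        mdadm --examine /dev/sda1 | grep -E 'Events|Update Time'
        mdadm --examine /dev/sdc1 | grep -E 'Events|Update Time'

    The member with the higher event count is the one md considers most recent; a large gap between the two counters also shows the members are out of sync.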

    Read the article

  • VPN on PC vs Mac

    - by allstar
    I am trying to connect to a VPN from my home computer, as opposed to my work computer, which already has the network info set up. I have received instructions for connecting from a Mac, but since I don't have one I'm trying to do the equivalent on my PC. I know the server, the group name, the secret, and my own login account and password. The built-in Windows 7 VPN client has fields for: Internet address, destination name, user name, password, and domain (optional). I'm trying to determine what's what. I assume the Internet address is the server. I've tried using the "secret" as the password, because I'd think the first part is connecting to the VPN as opposed to logging in. It still wants a user name, though; I tried mine, and I tried the group name. I would appreciate your help with this. Thanks!

    Read the article

  • iso9660 filesystem when remade with a slight change blows its size by 100M

    - by user1458001
    I have an ISO 9660 filesystem image in which I need to edit just one file. I copied the files out using cp -avf. When the files reach the destination, their sizes increase, which must be due to the change in block size. But when I remake the ISO 9660 filesystem using mkisofs -J -U -r, the file sizes remain the same, yet that one small edit to a file leads to a blow-up of about 100 MB in the newly created ISO image. I think I'm missing some option, but I'm not able to find it in the man page or via a Google search. Some quick help would be greatly appreciated, as I'm stuck. My host filesystem is ext3, if that's relevant.

    Read the article

  • How can I push a git repository to a folder over SSH?

    - by Rich
    I have a folder called my-project, inside which I've done git init, git commit -a, etc. Now I want to push it to an empty folder at /mnt/foo/bar on a remote server. How can I do this? Based on what I'd read, I did try:

        cd my-project
        git remote add origin ssh://user@host/mnt/foo/bar/my-project.git
        git push origin master

    which didn't seem right (I'd assume the source would come before the destination), and it failed:

        fatal: '/mnt/boxee/git/midwinter-physiotherapy.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    I'd like this to work such that I don't have to access the remote host and manually init a git repository every time ... do I have to do that? Am I going down the right route at all? Thanks.
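
    A minimal sketch of the usual approach, reusing the paths from the question (this assumes you have SSH access to the host and that /mnt/foo/bar is writable): the bare repository does have to be created once on the server, but after that pushes work directly.

        # one-time setup of a bare repository on the server
        ssh user@host "git init --bare /mnt/foo/bar/my-project.git"

        # then, from the local working copy (skip the remote add if origin already exists)
        cd my-project
        git remote add origin ssh://user@host/mnt/foo/bar/my-project.git
        git push origin master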

    Read the article

  • Wireshark TCP Window Size Value

    - by T Vernon
    I am debugging an application with Wireshark and watching the TCP window size value shrink on one side of the communication. If a packet's TCP section shows "Window size value: 1", does that mean the source's window size is 1 or the destination's window size is 1? I know one side is communicating faster than the other can handle; I just want to be sure I know which one it is.

        1  192.168.0.1   - 192.168.0.100  Modbus/TCP  Length: 66   Window Size Value: 1
        2  192.168.0.100 - 192.168.0.1    TCP         Length: 60   Window Size Value: 92
        3  192.168.0.100 - 192.168.0.1    TCP         Length: 310  Window Size Value: 92
        4  192.168.0.1   - 192.168.0.100  TCP         Length: 54   Window Size Value: 0

    So is 192.168.0.1's window size 0, or is it reporting that 192.168.0.100's window is 0? Thanks.
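
    As a side note, the window size advertised in a TCP segment describes the receive buffer of the host that sent that segment, so pairing the field with the source address makes the reading unambiguous. One way to list that pairing for a whole capture (the capture file name here is just an assumption):

        tshark -r capture.pcap -T fields -e frame.number -e ip.src -e ip.dst -e tcp.window_size_value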

    Read the article

  • What is the best plan to handle server fault for google app engine [closed]

    - by lucemia
    I have used Google App Engine without preparing much of a backup plan before, but that no longer looks like a good idea. Since it is quite hard to find a backup replacement for Google App Engine, I plan to just add a "server error" page that will be shown during an outage. Currently I am thinking of the following:

    1. Use the CDN CloudFlare in front of Google App Engine. It will also handle the name servers for me.
    2. Prepare a static version of some pages (such as "Oops! The server is down") on another hosting platform.
    3. When Google App Engine fails, switch the destination from Google App Engine to the static page by changing the CNAME records on CloudFlare.

    Is there any other recommended way to handle this situation?

    Read the article

  • Weird formatting in Word 2010

    - by Stat-R
    A few months ago, while writing a paper, I copied some paragraphs created on one computer to a different computer. I guess the formatting was different. Please see the following image: I noticed that a strange formatting has also been imported. I thought it would go away when I selected all and chose a format, but the problem did not go away. Now that I am trying to finish the paper, the weird formatting still remains. Does anyone have any solution? Also, how do I make sure that when we copy something from a file with different Styles, we retain the destination style definitions? EDIT: I would prefer a solution where I do not have to re-do the formatting manually.

    Read the article

  • Excel Macro Runtime error 428 in Excel 2003

    - by Adam
    Hi I have created a xlt excel template which works fine in Excel 2007 under compatibility mode and shows no errors on compatibility check. The template runs a number of Macros which creates pivot tables and charts. When a colleague tries to run the same xlt on excel 2003 they get a Runtime error 428 (Object does not support this property or method). The runtime error fails at this point; ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _ DefaultVersion:=xlPivotTableVersion10 Any help would be appreciated. This is the full Macro; Sub Auto_Open() ' ' ImportData Macro ' Macro to import data, Data must be in your local D: Drive and named raw.csv ' ' Sheets("raw").Select With ActiveSheet.QueryTables.Add(Connection:= _ "TEXT;d:\raw.csv", Destination:=Range _ ("$A$1")) .Name = "raw_1" .FieldNames = True .RowNumbers = False .FillAdjacentFormulas = False .PreserveFormatting = True .RefreshOnFileOpen = False .RefreshStyle = xlInsertDeleteCells .SavePassword = False .SaveData = True .AdjustColumnWidth = True .RefreshPeriod = 0 .TextFilePromptOnRefresh = False .TextFilePlatform = 850 .TextFileStartRow = 1 .TextFileParseType = xlDelimited .TextFileTextQualifier = xlTextQualifierDoubleQuote .TextFileConsecutiveDelimiter = False .TextFileTabDelimiter = False .TextFileSemicolonDelimiter = False .TextFileCommaDelimiter = True .TextFileSpaceDelimiter = False .TextFileColumnDataTypes = Array(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, _ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) .TextFileTrailingMinusNumbers = True .Refresh BackgroundQuery:=False End With ' ' AddMonthColumn Macro ' ' Sheets("raw").Select Range("AK1").Select ActiveCell.FormulaR1C1 = "Month" Range("AK2").FormulaR1C1 = "=DATE(YEAR(RC[-36]),MONTH(RC[-36]),1)" LastRow = ActiveSheet.UsedRange.Rows.Count Range("AK2").AutoFill Destination:=Range("AK2:AK" & LastRow) Columns("AK:AK").EntireColumn.AutoFit Columns("AK:AK").Select Selection.NumberFormat = "mmmm" With Selection .HorizontalAlignment = xlCenter End With Columns("AK:AK").EntireColumn.AutoFit Selection.Copy Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks _ :=False, Transpose:=False ' ' Add Report Information [Text] ' Sheets("Frontpage").Select Range("A2:N2").Select Selection.Merge ActiveCell.FormulaR1C1 = "Service Activity Report" With Selection.Font .Size = 20 End With Range("A3:N3").Select Selection.Merge ActiveCell.FormulaR1C1 = InputBox("Customer Name") With Selection .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter End With Range("A4:N4").Select Selection.Merge ActiveCell.FormulaR1C1 = InputBox("Date Range dd/mm/yyyy - dd/mm/yyyy") With Selection .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter End With ' ' IncidentsbyPriority Macro ' ' Sheets("Frontpage").Select Range("A7").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(7, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$7:$H$22") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable2").PivotFields("Priority") .Orientation = xlRowField .Position = 1 End With 
ActiveSheet.PivotTables("PivotTable2").AddDataField ActiveSheet.PivotTables( _ "PivotTable2").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentsbyPriority" ActiveChart.ChartTitle.Text = "Incidents by Priority" Dim RngToCover As Range Dim ChtOb As ChartObject Set RngToCover = ActiveSheet.Range("D7:L16") Set ChtOb = ActiveSheet.ChartObjects("IncidentsbyPriority") ChtOb.Height = RngToCover.Height ' resize ChtOb.Width = RngToCover.Width ' resize ChtOb.Top = RngToCover.Top ' reposition ChtOb.Left = RngToCover.Left ' reposition ' ' IncidentbyMonth Macro ' ' Sheets("Frontpage").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R18C1", TableName:="PivotTable4", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(18, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$18:$H$38") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable4").PivotFields("Month") .Orientation = xlRowField .Position = 1 End With ActiveSheet.PivotTables("PivotTable4").AddDataField ActiveSheet.PivotTables( _ "PivotTable4").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbyMonth" ActiveChart.ChartTitle.Text = "Incidents by Month" Dim RngToCover2 As Range Dim ChtOb2 As ChartObject Set RngToCover2 = ActiveSheet.Range("D18:L30") Set ChtOb2 = ActiveSheet.ChartObjects("IncidentbyMonth") ChtOb2.Height = RngToCover2.Height ' resize ChtOb2.Width = RngToCover2.Width ' resize ChtOb2.Top = RngToCover2.Top ' reposition ChtOb2.Left = RngToCover2.Left ' reposition ' ' IncidentbyCategory Macro ' ' Sheets("Frontpage").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R38C1", TableName:="PivotTable6", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(38, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$38:$H$119") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 2") .Orientation = xlRowField .Position = 1 End With With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 3") .Orientation = xlPageField .Position = 1 End With ActiveSheet.PivotTables("PivotTable6").AddDataField ActiveSheet.PivotTables( _ "PivotTable6").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbyCategory" ActiveChart.ChartTitle.Text = "Incidents by Category" Dim RngToCover3 As Range Dim ChtOb3 As ChartObject Set RngToCover3 = ActiveSheet.Range("D38:L56") Set ChtOb3 = ActiveSheet.ChartObjects("IncidentbyCategory") ChtOb3.Height = RngToCover3.Height ' resize ChtOb3.Width = RngToCover3.Width ' resize ChtOb3.Top = RngToCover3.Top ' reposition ChtOb3.Left = RngToCover3.Left ' reposition ' ' IncidentsbySiteandPriority Macro ' ' Sheets("Frontpage").Select Range("A71").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R71C1", TableName:="PivotTable3", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(71, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$71:$H$90") 
ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable3").PivotFields("Site Name") .Orientation = xlRowField .Position = 1 End With With ActiveSheet.PivotTables("PivotTable3").PivotFields("Priority") .Orientation = xlColumnField .Position = 1 End With ActiveSheet.PivotTables("PivotTable3").AddDataField ActiveSheet.PivotTables( _ "PivotTable3").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbySiteandPriority" ' ActiveChart.ChartTitle.Text = "Incidents by Site and Priority" Dim RngToCover4 As Range Dim ChtOb4 As ChartObject Set RngToCover4 = ActiveSheet.Range("H71:O91") Set ChtOb4 = ActiveSheet.ChartObjects("IncidentbySiteandPriority") ChtOb4.Height = RngToCover4.Height ' resize ChtOb4.Width = RngToCover4.Width ' resize ChtOb4.Top = RngToCover4.Top ' reposition ChtOb4.Left = RngToCover4.Left ' reposition Columns("A:G").Select Range("A52").Activate Columns("A:G").EntireColumn.AutoFit End Sub
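
    One hedged guess about the failing line: PivotCaches.Create was only introduced in Excel 2007, whereas Excel 2003 exposes PivotCaches.Add, which would explain why the template runs in 2007 but not in 2003. An untested 2003-compatible form of the failing call, keeping the same ranges and names as above, might look like this:

        ActiveWorkbook.PivotCaches.Add(SourceType:=xlDatabase, SourceData:= _
            "raw!R1C1:R65536C37").CreatePivotTable _
            TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _
            DefaultVersion:=xlPivotTableVersion10

    The same substitution would be needed for the other PivotCaches.Create calls later in the macro.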

    Read the article

  • Craziest JavaScript behavior I've ever seen

    - by Dan Ray
    And that's saying something. This is based on the Google Maps sample for Directions in the Maps API v3. <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no"/> <meta http-equiv="content-type" content="text/html; charset=UTF-8"/> <title>Google Directions</title> <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script> <script type="text/javascript"> var directionDisplay; var directionsService = new google.maps.DirectionsService(); var map; function initialize() { directionsDisplay = new google.maps.DirectionsRenderer(); var myOptions = { zoom:7, mapTypeId: google.maps.MapTypeId.ROADMAP } map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); directionsDisplay.setMap(map); directionsDisplay.setPanel(document.getElementById("directionsPanel")); } function render() { var start; if(navigator.geolocation) { navigator.geolocation.getCurrentPosition(function(position) { start = new google.maps.LatLng(position.coords.latitude,position.coords.longitude); }, function() { handleNoGeolocation(browserSupportFlag); }); } else { // Browser doesn't support Geolocation handleNoGeolocation(); } alert("booga booga"); var end = '<?= $_REQUEST['destination'] ?>'; var request = { origin:start, destination:end, travelMode: google.maps.DirectionsTravelMode.DRIVING }; directionsService.route(request, function(response, status) { if (status == google.maps.DirectionsStatus.OK) { directionsDisplay.setDirections(response); } }); } </script> </head> <body style="margin:0px; padding:0px;" onload="initialize()"> <div><div id="map_canvas" style="float:left;width:70%; height:100%"></div> <div id="directionsPanel" style="float:right;width:30%;height 100%"></div> <script type="text/javascript">render();</script> </body> </html> See that "alert('booga booga')" in there? With that in place, this all works fantastic. Comment that out, and var start is undefined when we hit the line to define var request. I discovered this when I removed the alert I put in there to show me the value of var start, and it quit working. If I DO ask it to alert me the value of var start, it tells me it's undefined, BUT it has a valid (and accurate!) value when we define var request a few lines later. I'm suspecting it's a timing issue--like an asynchronous something is having time to complete in the background in the moment it takes me to dismiss the alert. Any thoughts on work-arounds?
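
    That is almost certainly a timing issue: getCurrentPosition is asynchronous, so by the time the request object is built the success callback has usually not fired yet, and the alert just happens to stall long enough for it to complete. A minimal sketch of one way to restructure it, moving the routing into the callback (routeFrom is a hypothetical helper name, everything else reuses the page's own objects):

        function render() {
          if (navigator.geolocation) {
            navigator.geolocation.getCurrentPosition(function(position) {
              // Only build the directions request once we actually have a position.
              routeFrom(new google.maps.LatLng(position.coords.latitude,
                                               position.coords.longitude));
            }, function() {
              handleNoGeolocation(browserSupportFlag);
            });
          } else {
            handleNoGeolocation();
          }
        }

        function routeFrom(start) {
          var request = {
            origin: start,
            destination: '<?= $_REQUEST['destination'] ?>',
            travelMode: google.maps.DirectionsTravelMode.DRIVING
          };
          directionsService.route(request, function(response, status) {
            if (status == google.maps.DirectionsStatus.OK) {
              directionsDisplay.setDirections(response);
            }
          });
        }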

    Read the article

  • Java CORBA Client Disconnects Immediately

    - by Benny
    I have built a Java CORBA application that subscribes to an event server. The application narrows and logs on just fine, but as soon as an event is sent to the client, it breaks with the error below. Please advise. 2010/04/25!13.00.00!E00555!enserver!EventServiceIF_i.cpp!655!PID(7390)!enserver - e._info=system exception, ID 'IDL:omg.org/CORBA/TRANSIENT:1.0' TAO exception, minor code = 54410093 (invocation connect failed; ECONNRESET), completed = NO EDIT: Please note, this only happens when running on some machines. It works on some, but not others. Even on the same platform (I've tried Windows XP/7 and CentOS linux) Some work, some don't... Here is the WireShark output...looks like the working PC is much more interactive with the network compared to the non-working PC. Working PC No. Time Source Destination Protocol Info 62 28.837255 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 63 28.907068 fe80::5de0:8d21:937e:c649 ff02::1:3 LLMNR Standard query A isatap 64 28.907166 10.10.10.209 224.0.0.252 LLMNR Standard query A isatap 65 29.107259 10.10.10.209 10.255.255.255 NBNS Name query NB ISATAP<00> 66 29.227000 10.10.10.250 10.10.10.209 TCP 23120 > 50169 [SYN, ACK] Seq=0 Ack=1 Win=32768 Len=0 MSS=1260 WS=0 67 29.227032 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [ACK] Seq=1 Ack=1 Win=66560 Len=0 68 29.238063 10.10.10.209 10.10.10.250 GIOP GIOP 1.1 Request s=326 id=5 (two-way): op=logon 69 29.291765 10.10.10.250 10.10.10.209 GIOP GIOP 1.1 Reply s=420 id=5: No Exception 70 29.301395 10.10.10.209 10.10.10.250 GIOP GIOP 1.1 Request s=369 id=6 (two-way): op=registerEventStat 71 29.348275 10.10.10.250 10.10.10.209 GIOP GIOP 1.1 Reply s=60 id=6: No Exception 72 29.405250 10.10.10.209 10.10.10.250 TCP 50170 > telnet [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 73 29.446055 10.10.10.250 10.10.10.209 TCP telnet > 50170 [SYN, ACK] Seq=0 Ack=1 Win=32768 Len=0 MSS=1260 WS=0 74 29.446128 10.10.10.209 10.10.10.250 TCP 50170 > telnet [ACK] Seq=1 Ack=1 Win=66560 Len=0 75 29.452021 10.10.10.209 10.10.10.250 TELNET Telnet Data ... 76 29.483537 10.10.10.250 10.10.10.209 TELNET Telnet Data ... 77 29.483651 10.10.10.209 10.10.10.250 TELNET Telnet Data ... 78 29.523463 10.10.10.250 10.10.10.209 TCP telnet > 50170 [ACK] Seq=4 Ack=5 Win=32768 Len=0 79 29.554954 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [ACK] Seq=720 Ack=505 Win=66048 Len=0 Non-working PC No. Time Source Destination Protocol Info 1 0.000000 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 2 2.999847 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 3 4.540773 Cisco_3c:78:00 Cisco-Li_55:87:72 ARP Who has 10.0.0.1? Tell 10.10.10.209 4 4.540843 Cisco-Li_55:87:72 Cisco_3c:78:00 ARP 10.0.0.1 is at 00:1a:70:55:87:72 5 8.992284 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260

    Read the article

  • Delphi 7 - How can I copy a file that is being written to?

    - by Simon
    I have an application that logs information to a daily text file every second on a master PC. A Slave PC on the network using the same application would like to copy this text file to its local drive. I can see there is going to be file access issues. These files should be no larger than 30-40MB each. the network will be 100MB ethernet. I can see there is potential for the copying process to take longer than 1 second meaning the logging PC will need to open the file for writing while it is being read. What is the best method for the file writing(logging) and file copying procedures? I know there is the standard Windows CopyFile() procedure, however this has given me file access problems. There is also TFileStream using the fmShareDenyNone flag, but this also very occasionally gives me an access problem too (like 1 per week). What is this the best way of accomplishing this task? My current File Logging: procedure FSWriteline(Filename,Header,s : String); var LogFile : TFileStream; line : String; begin if not FileExists(filename) then begin LogFile := TFileStream.Create(FileName, fmCreate or fmShareDenyNone); try LogFile.Seek(0,soFromEnd); line := Header + #13#10; LogFile.Write(line[1],Length(line)); line := s + #13#10; LogFile.Write(line[1],Length(line)); finally logfile.Free; end; end else begin line := s + #13#10; Logfile:=tfilestream.Create(Filename,fmOpenWrite or fmShareDenyNone); try logfile.Seek(0,soFromEnd); Logfile.Write(line[1], length(line)); finally Logfile.free; end; end; end; My file copy procedure: procedure DoCopy(infile, Outfile : String); begin ForceDirectories(ExtractFilePath(outfile)); //ensure folder exists if FileAge(inFile) = FileAge(OutFile) then Exit; //they are the same modified time try { Open existing destination } fo := TFileStream.Create(Outfile, fmOpenReadWrite or fmShareDenyNone); fo.Position := 0; except { otherwise Create destination } fo := TFileStream.Create(OutFile, fmCreate or fmShareDenyNone); end; try { open source } fi := TFileStream.Create(InFile, fmOpenRead or fmShareDenyNone); try cnt:= 0; fi.Position := cnt; max := fi.Size; {start copying } Repeat dod := BLOCKSIZE; // Block size if cnt+dod>max then dod := max-cnt; if dod>0 then did := fo.CopyFrom(fi, dod); cnt:=cnt+did; Percent := Round(Cnt/Max*100); until (dod=0) finally fi.free; end; finally fo.free; end; end;
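
    One common workaround, sketched below under the assumption that the occasional access error is transient (the writer only holds the file open for a fraction of each second), is simply to retry the copy a few times before giving up; the helper name and retry counts here are made up:

        // Hedged sketch: wrap the existing DoCopy in a short retry loop.
        function TryCopyWithRetry(const InFile, OutFile: String): Boolean;
        var
          Attempt: Integer;
        begin
          Result := False;
          for Attempt := 1 to 5 do
          begin
            try
              DoCopy(InFile, OutFile);  // the procedure shown above
              Result := True;
              Exit;
            except
              Sleep(200);  // wait briefly, then try again (Sleep is in the Windows unit)
            end;
          end;
        end;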

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.

    Read the article

  • Performance issues with jms and spring integration. What is wrong with the following configuration?

    - by user358448
    I have a jms producer, which generates many messages per second, which are sent to amq persistent queue and are consumed by single consumer, which needs to process them sequentially. But it seems that the producer is much faster than the consumer and i am having performance and memory problems. Messages are fetched very very slowly and the consuming seems to happen on intervals (the consumer "asks" for messages in polling fashion, which is strange?!) Basically everything happens with spring integration. Here is the configuration at the producer side. First stake messages come in stakesInMemoryChannel, from there, they are filtered throw the filteredStakesChannel and from there they are going into the jms queue (using executor so the sending will happen in separate thread) <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue"> <constructor-arg name="name" value="${jms.stakes.queue.name}" /> </bean> <int:channel id="stakesInMemoryChannel" /> <int:channel id="filteredStakesChannel" > <int:dispatcher task-executor="taskExecutor"/> </int:channel> <bean id="stakeFilterService" class="cayetano.games.stake.StakeFilterService"/> <int:filter input-channel="stakesInMemoryChannel" output-channel="filteredStakesChannel" throw-exception-on-rejection="false" expression="true"/> <jms:outbound-channel-adapter channel="filteredStakesChannel" destination="stakesQueue" delivery-persistent="true" explicit-qos-enabled="true" /> <task:executor id="taskExecutor" pool-size="100" /> The other application is consuming the messages like this... The messages come in stakesInputChannel from the jms stakesQueue, after that they are routed to 2 separate channels, one persists the message and the other do some other stuff, lets call it "processing". <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue"> <constructor-arg name="name" value="${jms.stakes.queue.name}" /> </bean> <jms:message-driven-channel-adapter channel="stakesInputChannel" destination="stakesQueue" acknowledge="auto" concurrent-consumers="1" max-concurrent-consumers="1" /> <int:publish-subscribe-channel id="stakesInputChannel" /> <int:channel id="persistStakesChannel" /> <int:channel id="processStakesChannel" /> <int:recipient-list-router id="customRouter" input-channel="stakesInputChannel" timeout="3000" ignore-send-failures="true" apply-sequence="true" > <int:recipient channel="persistStakesChannel"/> <int:recipient channel="processStakesChannel"/> </int:recipient-list-router> <bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy"> <property name="queuePrefetch" value="${jms.broker.prefetch.policy}" /> </bean> <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"> <property name="targetConnectionFactory"> <bean class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="${jms.broker.url}" /> <property name="prefetchPolicy" ref="prefetchPolicy" /> <property name="optimizeAcknowledge" value="true" /> <property name="useAsyncSend" value="true" /> </bean> </property> <property name="sessionCacheSize" value="10"/> <property name="cacheProducers" value="false"/> </bean>

    Read the article

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in even tens of thousands of sprites being drawn to the screen simultaneously. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I have previously taken a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a call like

        int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture, const SDL_Rect* srcrects, SDL_Rect* dstrects, int count)

    which would permit SDL to benefit from two common techniques used for efficient 2D graphics:

    Texture batching: Sorting sprites by the texture used, and then simultaneously rendering as many sprites that use the same texture as possible, changing only the source area on the texture and the destination area on the render target between sprites. This allows the encapsulation of the whole operation in a single GPU command, reducing the overhead drastically compared with multiple distinct calls.

    Texture atlases: Instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture. This lessens the impact of changing the current texture when switching between sprites, as the correct texture is often ready to be used from the previous draw call. Furthermore, the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites.

    My questions: Would SDL2 still get somewhat faster from any rudimentary sprite sorting, or from combining multiple images into one texture, thanks to automatic video driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower-level control over the GPU? Edit: Are there any plans to include such functionality in the near future?

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to put a "shadow" sprite on a background sprite using the equation MAX(0, Cd*1 - Cs*S), where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is a scale factor (between 0 and 1). The MAX() function is used to avoid negative results. This is a lighting effect: when the shadow sprite pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. The only way that comes to my mind is to change the blending equation to GL_FUNC_SUBTRACT, but it doesn't compile with cocos2d-x (it can't be found)... I would subclass the CCSprite class and override the draw() method in order to change the blending equation when needed, call the original draw() method, and restore the blending equation to its previous state at the end of the method. So my questions are two: How do I use glBlendEquation() with cocos2d-x? Keep in mind that I am writing a game for iPhone/Android/Windows. And are shadows handled this way in 2D games? Thanks

    Read the article

  • defunct dbus-daemon zombie freezes login for 30 seconds

    - by oldenburgb
    I'm running Ubuntu 12.04.2 LTS (Precise Pangolin) and noticed a 30-second delay whenever I log into my server via ssh or perform any kind of login via sudo on that machine. I can provoke immediate execution by killing the defunct dbus-daemon showing up during the delay. Output of ps fax | grep dbus:

        19222 ?  Ss  0:00 dbus-daemon --system --fork --activation=upstart
        19752 ?  Z   0:00  \_ [dbus-daemon] <defunct>

    Tapping into the bus using dbus-monitor --system, I'm getting the following on each login:

        signal sender=org.freedesktop.DBus -> dest=(null destination) serial=7 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=NameOwnerChanged
           string ":1.4"
           string ""
           string ":1.4"

    Stopping the dbus service eliminates this problem but probably causes many others... I'm not running Xorg on the machine, but the packages are present for X11 forwarding. I've already ruled out the common motd script delay and the ssh "UseDNS no" fixes one finds when looking up login delay issues. Many thanks in advance for any help with this, it's been driving me crazy ;-)

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Bakhtiyor
    I have a mail server configured using Dovecot + Postfix + MySQL, and it was running fine on the server (Ubuntu Server). But during the last week it stopped working correctly: it doesn't send email. When I telnet localhost smtp I connect successfully, but when I type mail from:<[email protected]> and hit Enter, it just hangs and nothing happens. Having reviewed the /var/log/mail.log file, I've found that the problem is almost certainly (99%) in Postfix when it tries to connect to the MySQL server. As you can see in the log below, it says "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)".

        Nov 14 21:54:36 ns1 dovecot: dovecot: Killed with signal 15 (by pid=7731 uid=0 code=kill)
        Nov 14 21:54:36 ns1 dovecot: Dovecot v1.2.9 starting up (core dumps disabled)
        Nov 14 21:54:36 ns1 dovecot: auth-worker(default): mysql: Connected to localhost (mailserver)
        Nov 14 21:54:44 ns1 postfix/postfix-script[7753]: refreshing the Postfix mail system
        Nov 14 21:54:44 ns1 postfix/master[1670]: reload -- version 2.7.0, configuration /etc/postfix
        Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: fatal: mysql:/etc/postfix/mysql-virtual-alias-maps.cf(0,lock|fold_fix): table lookup problem
        Nov 14 21:54:53 ns1 postfix/master[1670]: warning: process /usr/lib/postfix/trivial-rewrite pid 7759 exit status 1
        Nov 14 21:54:53 ns1 postfix/cleanup[7397]: warning: problem talking to service rewrite: Connection reset by peer
        Nov 14 21:54:53 ns1 postfix/master[1670]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling
        Nov 14 21:54:53 ns1 postfix/smtpd[7071]: warning: problem talking to service rewrite: Success

    I tried netstat -ln | grep mysql and it returns:

        unix  2  [ ACC ]  STREAM  LISTENING  5817  /var/run/mysqld/mysqld.sock

    The content of the /etc/postfix/mysql-virtual-alias-maps.cf file is:

        user = stevejobs
        password = apple
        hosts = localhost
        dbname = mailserver
        query = SELECT destination FROM virtual_aliases WHERE source='%s'

    Here I tried to change hosts = 127.0.0.1, but then it says "warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110)". So I am lost and don't know where else to change things in order to solve the problem. Any help would be appreciated highly. Thank you.
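
    One possibility worth checking (an assumption, since it depends on how Postfix was packaged): smtpd and trivial-rewrite often run chrooted under /var/spool/postfix, where /var/run/mysqld/mysqld.sock is not visible, so the socket lookup fails even though MySQL itself is running. Letting the map talk TCP instead of the socket is a common workaround; a sketch of the relevant settings:

        # /etc/mysql/my.cnf -- make sure MySQL actually listens on TCP
        [mysqld]
        bind-address = 127.0.0.1
        # (and make sure skip-networking is not enabled)

        # /etc/postfix/mysql-virtual-alias-maps.cf
        hosts = 127.0.0.1

    followed by restarting MySQL and Postfix.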

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien - use the Schluesselwort/Keyword: upgrade112. And thanks to all of you who sent feedback regarding "traget/destination" (I will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html Thanks for this hint!!!

    Regarding the open questions:

    Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location.

    Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could neither find a bug entry nor any documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked it in my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences.

    Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 (the note was updated on 24-FEB-2010). See the telephone number to call Oracle Support to request an escalation here: http://www.oracle.com/support/contact.html

    Read the article

  • Can't verify my site on Google (error 403 Forbidden). I have other sites on the same host with no problems whatsoever

    - by Rosamunda Rosamunda
    I can't verify my site on Google. I've done this several times for several sites, all on the same host. I've tried the HTML tag method, HTML upload, the domain name provider (I can't find the options that Google tells me I should activate...), and Google Analytics. I always get this response: "Verification failed for http://www.mysite.com/ using the Google Analytics method (1 minute ago). Your verification file returns a status of 403 (Forbidden) instead of 200 (OK)." I've checked the server headers, and I get this result:

        REQUESTING: http://www.mysite.com
        GET / HTTP/1.1
        Connection: Keep-Alive
        Keep-Alive: 300
        Accept:/
        Host: www.mysite.com
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)

        SERVER RESPONSE: HTTP/1.1 403 Forbidden
        Date: Wed, 19 Sep 2012 03:25:22 GMT
        Server: Apache/2.2.19 (Unix) mod_ssl/2.2.19 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 PHP/5.2.17
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        Final Destination Page (it shows my actual homepage).

    What can I do? The hosting is the very same as for my other sites, where I didn't have any issue at all! Thanks for your help! Note: As I have a Drupal 7 site, I tried a "Drupal solution" first, but haven't found any that solved this issue... How can it be forbidden when I can access the link perfectly fine? Is there any solution to this? Thanks!

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1& 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: The dataflow containing just one Conditional Split (i.e. #1) will be quicker There is no significant difference between any of them One of the two dataflows containing multiple transformation components will be quicker Regardless of which of those outcomes come to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is setup. Observe that there is a case for each combination of Month Of Date and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence if we assume that Month of Date and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5 i.e. 1 (the minimum) + 24 (the maximum) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. 
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for thus the expected number of expression evaluations is 6.5 There are two of them so total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though so why is the third dataflow so much quicker? Simple, the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1 In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower, moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done and that doesn’t necessarily mean use less components, indeed sometimes you may be able to reduce workload in ways that aren’t immediately obvious as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups Destination Adapter Comparison Don’t turn the dataflow into a cursor SSIS Dataflow – Designing for performance (webinar) Any comments? Let me know! @Jamiet

    Read the article

  • cannot delete IPv6 default gateway

    - by NulledPointer
    The commands below should be pretty self-explanatory. Please note that the route for which i get failure is obtained by RA and has very less expiry ( e Flag in UDAe). @vm:~$ ip -6 route 2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592300sec fe80::/64 dev eth1 proto kernel metric 256 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1776sec @vm:~$ @vm:~$ @vm:~$ @vm:~$ sudo route -6 delete default gw fe80::20c:29ff:fe87:f9e7 @vm:~$ ip -6 route 2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592279sec fe80::/64 dev eth1 proto kernel metric 256 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1755sec @vm:~$ @vm:~$ @vm:~$ sudo route -6 delete ::/0 gw fe80::20c:29ff:fe87:f9e7 dev eth1 SIOCDELRT: No such process @vm:~$ @vm:~$ @vm:~$ route -n6 Kernel IPv6 routing table Destination Next Hop Flag Met Ref Use If 2001:4860:4001:800::1002/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:800::1003/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:800::1005/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:803::100e/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 fd00:ffff:ffff:fff1::/64 :: UAe 256 0 0 eth1 fe80::/64 :: U 256 0 0 eth1 ::/0 fe80::20c:29ff:fe87:f9e7 UGDAe 1024 0 0 eth1 ::/0 :: !n -1 1 349 lo ::1/128 :: Un 0 1 3 lo fd00:ffff:ffff:fff1:a00:27ff:fe7f:7245/128 :: Un 0 1 0 lo fd00:ffff:ffff:fff1:fce8:ce07:b9ea:389f/128 :: Un 0 1 0 lo fe80::a00:27ff:fe7f:7245/128 :: Un 0 1 0 lo ff00::/8 :: U 256 0 0 eth1 ::/0 :: !n -1 1 349 lo @vm:~$ UPDATE: Another question is whats the use of link local address as the default route?
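
    Two hedged suggestions, reusing the addresses from the output above: with a link-local next hop the outgoing interface is part of the route, so deleting it through iproute2 with the device named explicitly may succeed where route(8) reports "No such process"; and because the remaining default is a kernel route learned from router advertisements, it will simply come back on the next RA unless RA acceptance is turned off on that interface.

        # delete the RA-learned default route, naming the interface explicitly
        sudo ip -6 route del default via fe80::20c:29ff:fe87:f9e7 dev eth1

        # stop the kernel from re-adding it from future router advertisements
        sudo sysctl -w net.ipv6.conf.eth1.accept_ra=0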

    Read the article

  • Unable to format USB drive on 12.04 [daemon inhibited]

    - by santosamaru
    I tried to format my USB stick. The first time it worked and all the data was gone, but I can't save any file to the stick. Then I tried to check whether it is working or broken; here is the report:

        santos@santos:~$ sudo badblocks -v /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        Checking blocks 0 to 7824383
        Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
        Pass completed, 0 bad blocks found. (0/0/0 errors)
        santos@santos:~$ sudo badblocks -v -w /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        /dev/sdb is apparently in use by the system; it's not safe to run badblocks!
        santos@santos:~$

    How can I format the stick and fix this issue? I have read the link Formatting Pen Drive causes 'Daemon Is Inhibited' Error, and in my case, when I try to move any item from the desktop, it says "the destination is read only". I also searched Google and found the http://ubuntuforums.org/showthread.php?t=1955353 thread, which describes the same problem; following user13509's suggestion there did not help.
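
    The "apparently in use by the system" message normally just means the stick (or one of its partitions) is still mounted, since the desktop auto-mounts it; unmounting first usually lets the destructive test run. A sketch, assuming the mounted partition is /dev/sdb1:

        sudo umount /dev/sdb1          # repeat for any other mounted partitions on the stick
        sudo badblocks -v -w /dev/sdb  # destructive read-write test: erases everything on the stick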

    Read the article

  • Adding dynamic business logic/business process checks to a system

    - by Jordan Reiter
    I'm wondering if there is a good extant pattern (the language here is Python/Django, but I'm also interested at a more abstract level) for creating a business logic layer whose rules can be defined without coding. For example, suppose that a house rental should only be available during a specific time. A coder might create the following class:

        from bizlogic import rules, LogicRule
        from orders.models import Order

        class BeachHouseAvailable(LogicRule):
            def check(self, reservation):
                house = reservation.house_reserved
                if not (house.earliest_available < reservation.starts < house.latest_available):
                    raise RuleViolationWhen("Beach house is available only between %s and %s"
                                            % (house.earliest_available, house.latest_available))
                return True

        rules.add(Order, BeachHouseAvailable, name="BeachHouse Available")

    This is fine, but I don't want to have to code something like this each time a new rule is needed. I'd like to create something dynamic, ideally something that can be stored in a database. The thing is, it would have to be flexible enough to encompass a wide variety of rules: avoiding duplicates/overlaps (to continue the example, "You already have a reservation for this time/location"), logic rules ("You can't rent a house to yourself", "This house is in a different place from your chosen destination"), and sanity tests ("You've set a rental price that's 10x the normal rate. Are you sure this is the right price?"). Things like that. Before I reinvent the wheel, I'm wondering if there are already methods out there for doing something like this.
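
    A minimal sketch of one direction this could take, storing each rule as a database row and evaluating it against the instance being checked. Every name here is hypothetical, and eval() over stored text is only a placeholder for a proper restricted expression evaluator:

        from django.db import models

        class StoredRule(models.Model):
            # one row per business rule, editable without deploying code
            name = models.CharField(max_length=100)
            model_label = models.CharField(max_length=100)   # e.g. "orders.order"
            expression = models.TextField()                  # e.g. "obj.starts < obj.house_reserved.latest_available"
            message = models.TextField()

        def check_rules(obj):
            label = "%s.%s" % (obj._meta.app_label, obj._meta.model_name)
            for rule in StoredRule.objects.filter(model_label=label):
                # UNSAFE placeholder: swap eval() for a sandboxed evaluator in real use
                if not eval(rule.expression, {"__builtins__": {}}, {"obj": obj}):
                    raise ValueError(rule.message)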

    Read the article

  • Problems Using CloudFlare On Blogger

    - by the_archer
    Here's the situation. I got a TLD for my Blogger blog and set it up using the instructions from Blogger. Blogger asks you to add two CNAME records: for the first CNAME, where it says Name, Label or Host, enter "www", and where it says Destination, Target or Points To, enter "ghs.google.com". For the second CNAME, enter "NHRILA4K2RJG" as the Name and "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." as the value. I did that on my domain host, and everything was working smoothly. Here's what happened: typing myblog.blogspot.com in the address bar brought me to my new address www.mynewaddress.tld, and typing mynewaddress.tld brings me to www.mynewaddress.tld. Then I went through the instructions to set up CloudFlare and did everything as required. I saw that CloudFlare is active and working on my TLD www.mynewaddress.tld; however, when I type the blogspot address, i.e. myblog.blogspot.com, it shows a notice that the blog is not hosted on Blogger and that I should click "yes" to get redirected to the new website. However, the blog is still on Blogger. I think the problem might be with that second CNAME record Google asks you to create, which I did not find imported into the CloudFlare nameservers. So I created that CNAME and added it to the CloudFlare panel. My question is: is that what will help Google determine that my blog is still hosted on Blogger? If so, should I turn CloudFlare off or on for that particular CNAME record? Any help is very much appreciated :)

    Read the article
