Search Results

Search found 13228 results on 530 pages for 'covering index'.


  • How do I center my navigation bar and background?

    - by user2892958
        #nav-wrap { background: url(nav-bg-blue.png) no-repeat top center; height: 39px; padding-top: 3px; }
        .no-header-page #nav-wrap { background: url(nav-bg-nobanner-blue.png) no-repeat top center; height: 43px; padding-top: 4px; margin-bottom: 30px; }
        #nav-wrap .container { clear: both; overflow: hidden; position: center; width: 100%; }
        #nav-wrap .container ul { list-style: none; float: center; }
        #nav-wrap .container ul li { list-style: none; float: left; background: url(nav-right-last.png) no-repeat top right; padding-right: 20px; margin-left: -10px; position: auto; }
        #nav-wrap .container ul span li { background: url(nav-right-last.png) no-repeat top right; }
        #nav-wrap .container ul li a { float: center; display: block; font-family: 'News Cycle', sans-serif; color: #fff; text-decoration: none; padding: 5px 10px 8px 20px; border: 0; outline: 0; list-style-type: none; font-size: 14px; text-transform: uppercase; letter-spacing: 2px; background: url(nav-left-first.png) no-repeat top left; line-height: 25px; text-shadow: 0 -1px 2px rgba(0,0,0,0.3); }
        #nav-wrap .container ul li#active, #nav-wrap .container ul li:hover { background: url(nav-hover-right-last-brown-red.png) no-repeat top right; z-index: 1; }
        #nav-wrap .container ul li:hover a, #nav-wrap .container ul li#active a, #nav-wrap .container ul li a:hover { border: 0; background: url(nav-hover-left-brown-red.png) no-repeat top left; }
        .wsite-nav-0 { margin-left: 0 !important; }
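
    One common way to center a horizontal nav like this is to give the inner container a fixed width with auto side margins, or to center inline-block items with text-align. A minimal sketch under those assumptions (the invalid position: center and float: center declarations above are dropped; the 960px width is illustrative):

        #nav-wrap .container { width: 960px; margin: 0 auto; }  /* auto side margins center a fixed-width block */
        #nav-wrap .container ul { list-style: none; margin: 0; padding: 0; text-align: center; }
        #nav-wrap .container ul li { display: inline-block; float: none; }  /* inline-block items obey text-align: center */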


  • Trying to pass variable from 1 Function to Another to Put in Array within same Model

    - by Jason Shultz
    Ok, that sounds really confusing. What I'm trying to do is this: I've got a function that uploads/resizes photos to the server and stores the paths in the DB. I need to attach the id of the business to the row of photos. Here's what I have so far:

        function get_bus_id() {
            $userid = $this->tank_auth->get_user_id();
            $this->db->select('b.id');
            $this->db->from('business AS b');
            $this->db->where('b.userid', $userid);
            $query = $this->db->get();
            if ($query->num_rows() > 0) {
                return $query->result_array();
            }
        }

    That gets the id of the business. Then I have my upload function, which is below:

        /* Uploads images to the site and adds to the database. */
        function do_upload() {
            $config = array(
                'allowed_types' => 'jpg|jpeg|gif|png',
                'upload_path'   => $this->gallery_path,
                'max_size'      => 2000
            );
            $this->load->library('upload', $config);
            $this->upload->do_upload();
            $image_data = $this->upload->data();
            $config = array(
                'source_image'   => $image_data['full_path'],
                'new_image'      => $this->gallery_path . '/thumbs',
                'maintain_ratio' => true,
                'width'          => 150,
                'height'         => 100
            );
            $this->load->library('image_lib', $config);
            $this->image_lib->resize();
            $upload = $this->upload->data();
            $bus_id = $this->get_bus_id();
            $data = array(
                'userid'   => $this->tank_auth->get_user_id(),
                'thumb'    => $this->gallery_path . '/thumbs/' . $upload['file_name'],
                'fullsize' => $upload['full_path'],
                'busid'    => $bus_id['query'],
            );
            echo var_dump($bus_id);
            $this->db->insert('photos', $data);
        }

    The problem I'm getting is the following:

        A PHP Error was encountered
        Severity: Notice
        Message: Undefined index: id
        Filename: models/gallery_model.php
        Line Number: 48

    I've tried all sorts of ways to get the value over, but my limited knowledge keeps getting in the way. Any help would be really appreciated.
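
    The "Undefined index" notice usually comes from indexing the wrong shape of result: result_array() returns a numerically indexed list of rows, not a single row, and 'query' is never a key in it. A minimal sketch of one fix, assuming CodeIgniter's standard row_array() accessor and the table layout implied above:

        function get_bus_id() {
            $userid = $this->tank_auth->get_user_id();
            $this->db->select('b.id');
            $this->db->from('business AS b');
            $this->db->where('b.userid', $userid);
            $query = $this->db->get();
            if ($query->num_rows() > 0) {
                $row = $query->row_array();   // first row as array('id' => ...)
                return $row['id'];            // return the scalar id, not the whole result set
            }
            return null;
        }

    Then in do_upload() the array entry becomes 'busid' => $this->get_bus_id().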


  • My horizontal drop down with CSS, sub navigation menu items are being displayed on top of each other

    - by Rigo Collazo
    My sub navigation menu items are being displayed on top of each other. Here is the code:

        /* NAVIGATION */
        .nav-bar {width: 100%; height: 80px; background-image: url(../images/bg-menu80.jpg);}
        .nav-hold {overflow: hidden;}
        .nav-list {float: right;}
        .nav-list li {float: left; width: auto; position: relative;}
        .nav-list li a {text-decoration: none; display: block; padding: 30px 7px 20px 7px; color: #f9f9f9; font-size: .9em; font-weight: bold;}
        .nav-list li ul {display: none;}
        .nav-list li a:hover {text-decoration: none; display: block; padding: 30px 7px 20px 7px; color: #000; font-size: .9em; font-weight: bold; background-color: #e7e4e4;}
        .nav-list li a:hover li {display: block; position: absolute; margin: 0; padding: 0;}
        .nav-list li a:hover li {float: left;}
        .nav-list li:hover li a {background-color: #333; border-bottom: 1px solid #fff; color: #FFF;}

        <ul class="nav-list" id="navigation"><!--Menu list-->
            <li><a href="index.html">Home</a></li>
            <li><a href="about.html">About Us</a></li>
            <li>
                <ul><a href="members.html">Members</a>
                    <li><a href="board.html">Board of Directors</a></li>
                    <li><a href="committee.html">Committee</a></li>
                </ul>
            </li>
            <li><a href="join.html">Join Us</a></li>
            <li><a href="events.html">Events</a></li>
            <li><a href="rules.html">Rules &amp; Guidelines</a></li>
            <li><a href="archive.html">Archive</a></li>
            <li><a href="contact.html">Contact Us</a></li>
            <li><a href="#">Login</a></li>
        </ul><!--ENDS Menu list-->
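
    Two things work against the markup above: the Members link sits inside the nested <ul> instead of before it, and the hover rules target li elements inside an <a> (".nav-list li a:hover li"), which never exist. The usual dropdown pattern keeps the trigger link and the sub list as siblings inside the same li, then reveals the absolutely positioned sub list on li:hover. A minimal sketch under those assumptions:

        <li>
            <a href="members.html">Members</a>  <!-- trigger link is a sibling of the sub list -->
            <ul>
                <li><a href="board.html">Board of Directors</a></li>
                <li><a href="committee.html">Committee</a></li>
            </ul>
        </li>

        .nav-list li ul {display: none; position: absolute; top: 100%; left: 0; margin: 0; padding: 0;}
        .nav-list li:hover ul {display: block;}  /* reveal on hover of the parent li */
        .nav-list li ul li {float: none;}        /* stack sub items vertically instead of floating */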


  • How to calculate where bullet hits

    - by lkjoel
    I have been trying to write an FPS in C/X11/OpenGL, but the issue that I have encountered is with calculating where the bullet hits. I have used a horrible technique, and it only sometimes works:

        pos size, p;
        size.x = 0.1;
        size.z = 0.1;
        // Since the game is technically top-down (but in a 3D perspective)
        // positions are in X/Z, no Y
        float f;  // Counter
        float d = FIRE_MAX + 1 /* Shortest distance */, d1 /* Distance being calculated */;
        x = 0;  // Index of object to hit
        for (f = 0.0; f < FIRE_MAX; f += .01) {
            // Go forwards
            p.x = player->pos.x + f * sin(toRadians(player->rot.x));
            p.z = player->pos.z - f * cos(toRadians(player->rot.x));
            // Get all objects that collide with the current position of the bullet
            short* objs = _colDetectGetObjects(p, size, objects);
            for (i = 0; i < MAX_OBJECTS; i++) {
                if (objs[i] == -1) {
                    continue;
                }
                // Check the distance between the object and the player
                d1 = sqrt(pow((objects[i].pos.x - player->pos.x), 2)
                        + pow((objects[i].pos.z - player->pos.z), 2));
                // If it's closer, set it as the object to hit
                if (d1 < d) {
                    x = i;
                    d = d1;
                }
            }
            // If there was an object, hit it
            if (x > 0) {
                hit(&objects[x], FIRE_DAMAGE, explosions, currtime);
                break;
            }
        }

    It just works by making a for-loop and calculating any objects that might collide with where the bullet currently is. This, of course, is very slow, and sometimes doesn't even work. What would be the preferred way to calculate where the bullet hits? I have thought of making a line and seeing if any objects collide with that line, but I have no idea how to do that kind of collision detection.

    EDIT: I guess my question is this: How do I calculate the nearest object colliding in a line (that might not be a straight 45/90 degree angle)? Or are there any simpler methods of calculating where the bullet hits? The bullet is sort of like a laser, in the sense that gravity does not affect it (I'm writing an old-school game, so I don't want it to be too realistic).
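
    The line idea is exactly the standard approach: cast a ray from the player along the firing direction and intersect it analytically with each object's bounding circle in the X/Z plane, keeping the closest hit. No stepping, and no tunnelling through thin objects. A minimal sketch in C using the structures above (the per-object radius r is an assumption):

        #include <math.h>

        /* Returns the distance along the ray to the nearest hit, or -1 if nothing is hit.
           Ray origin (ox, oz), unit direction (dx, dz); *hitIndex receives the object index. */
        float nearest_hit(float ox, float oz, float dx, float dz, int *hitIndex) {
            float best = -1.0f;
            for (int i = 0; i < MAX_OBJECTS; i++) {
                float cx = objects[i].pos.x - ox;        /* vector from ray origin to object center */
                float cz = objects[i].pos.z - oz;
                float t  = cx * dx + cz * dz;            /* projection of that vector onto the ray */
                if (t < 0.0f || t > FIRE_MAX) continue;  /* behind the player or out of range */
                float d2 = cx * cx + cz * cz - t * t;    /* squared distance from center to the ray */
                float r  = 0.1f;                         /* assumed bounding-circle radius */
                if (d2 > r * r) continue;                /* ray misses this object */
                t -= sqrtf(r * r - d2);                  /* pull back from closest point to entry point */
                if (best < 0.0f || t < best) { best = t; *hitIndex = i; }
            }
            return best;
        }

    The firing direction would be dx = sin(toRadians(player->rot.x)), dz = -cos(toRadians(player->rot.x)), matching the stepping code above.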


  • click handler after using Ajax

    - by Tom
    I have a site that plays a stream. I perform an AJAX call to the server once a person presses a button.

        <input type="submit" class="play"
               data-type="<?php echo $result_cameras[$i]["camera_type"]; ?>"
               data-hash="<?php echo $result_cameras[$i]["camera_hash"]; ?>"
               value="<?php echo $result_cameras[$i]["camera_name"]; ?>">

    This prints out a bunch of buttons that the user can select. This is processed by the following code:

        <script>
        $(document).ready(function(){
            $(".play").click(function(){
                var camerahash = $(this).data('hash');
                var cameratype = $(this).data('type');
                function doAjax(){
                    $.ajax({
                        url: 'index.php?option=streaming&task=playstream&id_hash=<?php echo $id_hash; ?>&camera_hash='+camerahash+'&format=raw',
                        success: function(data) {
                            if (data == 'Initializing...please wait') {
                                $('#quote p').html(data);
                                setTimeout(doAjax, 2000);
                            } else {
                                if (cameratype == "WEBCAM" && data == 'Stream is ready...') {
                                    $('#quote p').html(data);
                                    window.location = 'rtsp://<?php echo DEVSTREAMWEB; ?>/<?php echo $session_id;?>/'+camerahash;
                                } else if (cameratype == "AXIS" && data == 'Stream is ready...') {
                                    $('#quote p').html(data);
                                    window.location = 'rtsp://<?php echo DEVSTREAMIP; ?>/<?php echo $session_id;?>/'+camerahash;
                                } else {
                                    $('#quote p').html(data);
                                }
                            }
                        }
                    });
                }
                doAjax();
            });
        });
        </script>

    The server returns messages such as "Stream is ready...". My problem is that everything is working great except on additional button clicks. Specifically, when they get success (video plays) and they exit back out, they don't get any other messages if they click another button. It is as if the click event is not triggered. Do I need to be doing something to the click handler to respond?
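
    If leaving for the rtsp:// URL and coming back restores the page from cache, or the buttons get re-rendered, handlers bound directly with .click() can be lost. A delegated handler bound to the document keeps working regardless of when the buttons appear. A minimal sketch of that approach (jQuery 1.7+'s .on(); the factored-out startStream() helper is hypothetical):

        $(document).ready(function () {
            // Delegated binding: the handler lives on document, so it fires for
            // any current or future element matching .play.
            $(document).on('click', '.play', function (e) {
                e.preventDefault(); // keep the submit button from posting a form
                startStream($(this).data('hash'), $(this).data('type'));
            });
        });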


  • C# recursive programming with lists

    - by David Torrey
    I am working on a program where each item can hold an array of items (I'm making a menu, which has a tree-like structure). Currently I have the items as a list instead of an array, but I don't feel like I'm using it to its full potential to simplify code. I chose a list over a standard array because the interface (.Add, .Remove, etc...) makes a lot of sense. I have code to search through the structure and return the path of the name (i.e. Item.subitem.subsubitem.subsubsubitem). Below is my code:

        public class Item
        {
            //public Item[] subitem; <-- Array of Items
            public List<Item> subitem; // <-- List of Items
            public Color itemColor = Color.FromArgb(50, 50, 200);
            public Rectangle itemSize = new Rectangle(0, 0, 64, 64);
            public Bitmap itemBitmap = null;
            public string itemName;

            public string LocateItem(string searchName)
            {
                string tItemName = null;
                //if the item name matches the search parameter, send it up
                if (itemName == searchName)
                {
                    return itemName;
                }
                if (subitem != null)
                {
                    //spiral down a level
                    foreach (Item tSearchItem in subitem)
                    {
                        tItemName = tSearchItem.LocateItem(searchName);
                        if (tItemName != null) break; //exit for if item was found
                    }
                }
                //do name logic (use index numbers)
                //if LocateItem of the subitems returned nothing and the current item is not a match, return null (not found)
                if (tItemName == null && itemName != searchName)
                {
                    return null;
                }
                //if it's not the item being searched for and the search item was found, change the string and return it up
                if (tItemName != null && itemName != searchName)
                {
                    tItemName.Insert(0, itemName + "."); //insert the parent name on the left --> TopItem.SubItem.SubSubItem.SubSubSubItem
                    return tItemName;
                }
                //default not found
                return null;
            }
        }

    My question is whether there is an easier way to do this with lists. I've been going back and forth in my head as to whether I should use lists or just an array. The only reason I have a list is so that I don't have to write code to resize the array each time I add or remove an item.
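
    A List<Item> is the right call here; arrays would only add resizing bookkeeping. One real bug worth noting, though: strings are immutable in C#, so tItemName.Insert(0, ...) returns a new string that the code above throws away, and the parent name never gets prepended. A minimal sketch of the same search with that fixed and the branches collapsed (same members as above):

        public string LocateItem(string searchName)
        {
            // Direct match: the path is just this item's name.
            if (itemName == searchName)
                return itemName;

            if (subitem != null)
            {
                foreach (Item child in subitem)
                {
                    string childPath = child.LocateItem(searchName);
                    if (childPath != null)
                        return itemName + "." + childPath; // prepend the parent on the way back up
                }
            }
            return null; // not found in this subtree
        }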


  • Is it possible to make this Flex/Flash application safe?

    - by Frank
    I'm back with another Flex/Flash security question. I've already received some help from the community on this topic, but I'm still not quite sure this is the best way to do it. Here's the thing: a Flex web app, a lot of users (1000+), and a custom configuration of the application depending on the user group. Can I make this thing safe... or safer? At the moment, when a user comes to the application, there is only one configuration possible, but for the next version we've implemented a multi-configuration protocol, this way:

    1. The user connects to Default.aspx; server code processes the Windows credentials (we are on an intranet) and serves the correct XML configuration file.
    2. The Flex app loads with the XML conf file as a flashvar, and then the app 'builds' itself from the content of the XML file.

    As we know, since this is a Flex application, the swf is downloaded to the client computer, and the XML file too. If more than one user connects to the app from the same computer, they can possibly see the other user's XML file in the Windows temp folder. The current directory of the application looks this way:

        Web site
        |-> default.aspx
        |-> index.swf
        |-> configAdmin.xml
        |-> configUserType1.xml
        |-> configUserType2.xml
        |-> com
            |-> a lot of swf and xml files

    I was first thinking of making another directory (without read access for the client) containing all the configuration XML files, picking the right one, copying it to the client and deleting it afterwards. But it seems like I must let the user know when downloading/deleting content on their computer... I'm running out of ideas, so I hope you have some great ones. If there are some design flaws (in the way the app is built, not in Flash :p), please share. I'm always looking forward to improving. Thanks.

    Update: In-browser Flash/Flex (that is, without AIR) doesn't allow silently deleting a local file (on the client computer, where the application is). It's also not yet possible to get session data.
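
    One common way to avoid shipping per-group XML files from a world-readable folder at all is to serve the configuration from an authenticated server endpoint and have the Flex app request it at runtime, so nothing user-specific exists as a static file next to the swf. A minimal sketch, assuming ASP.NET with Windows authentication (the handler name, group lookup, and App_Data layout are all illustrative):

        using System.Web;

        // GetConfig.ashx -- returns only the configuration for the authenticated user
        public class GetConfig : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string group = LookupGroup(context.User.Identity.Name);
                context.Response.ContentType = "text/xml";
                // App_Data is never served directly by IIS, so the files can't be browsed
                context.Response.WriteFile(context.Server.MapPath(
                    "~/App_Data/config" + group + ".xml"));
            }

            private static string LookupGroup(string user)
            {
                // Illustrative stub: map the Windows account to a config group.
                return user.EndsWith("admin") ? "Admin" : "UserType1";
            }

            public bool IsReusable { get { return true; } }
        }

    The Flex side would then load the endpoint with URLLoader instead of receiving a file name as a flashvar. The XML still passes through the browser cache, but marking the response non-cacheable narrows that window considerably.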


  • When I add a bitmap to an array list, the last element is duplicated in previous indexes

    - by saxofone2
    I'm trying to implement a personal way of doing undo/redo in a finger paint-like app. I have, in essence, three objects: the Main class (named ScorePadActivity), the relative main Layout (with buttons, menus, etc., as well as a View object where I create my drawings), and a third object named ArrayList where I'm writing the undo/redo code. The problem is, when I press the undo button nothing happens, but if I draw anything again one time and press undo, the screen is updated. If I draw many times, then to see any change happen on screen I have to press the undo button the same number of times I have drawn. It seems like (as in the title) when I add a bitmap to the array list the last element is duplicated in the previous indexes, and for some strange reason, every time I press the undo button, the system is OK for one time, but starts to duplicate until the next undo. The index increase is verified with a series of System.out.println calls inserted in the code. When I draw something on screen, the array list is updated with the code inserted after the invocation of the touch_up() method in the motion event handler:

        touch_up();
        this.arrayClass.incrementArray(mBitmap);
        mPath.rewind();
        invalidate();

    and in the ArrayList activity (all Logs removed for clear reading):

        public void incrementArray(Bitmap mBitmap) {
            this._mBitmap = mBitmap;
            _size = undoArray.size();
            undoArray.add(_size, _mBitmap);
        }

    The undo button in ScorePadActivity calls the undo method in the View activity:

        Button undobtn = (Button) findViewById(R.id.undo);
        undobtn.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                mView.undo();
            }
        });

    In the View activity:

        public void undo() {
            this.mBitmap = arrayClass.undo();
            mCanvas = new Canvas(mBitmap);
            mPath.rewind();
            invalidate();
        }

    which calls the relative undo method in the ArrayList activity:

        public Bitmap undo() {
            // TODO Auto-generated method stub
            _size = undoArray.size();
            if (_size > 1) {
                undoArray.remove(_size - 1);
                _size = undoArray.size();
                _mBitmap = ((Bitmap) undoArray.get(_size - 1)).copy(Bitmap.Config.ARGB_8888, true);
            }
            return _mBitmap;
        }

    which returns mBitmap and invalidates. Due to my bad English I made a diagram to make the problem clearer (not shown here). I have tried with HashMap and with a simple array; I have tried to change mPath.rewind() to reset(), new Path(), etc., but nothing. Why? Sorry for the complex question; I want to give you a great thanks in advance. Best regards.
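
    The symptom matches storing a reference to the single mutable Bitmap the canvas keeps drawing into: every list entry ends up aliasing the same object, so all of them display the latest state, and undo only appears to work once the list shrinks past them. A minimal sketch of the usual fix, snapshotting an immutable copy on every stroke (same method as above):

        public void incrementArray(Bitmap mBitmap) {
            // Store a snapshot, not the live drawing bitmap; without the copy,
            // every element points at the same object and shows the newest drawing.
            undoArray.add(mBitmap.copy(Bitmap.Config.ARGB_8888, false));
        }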


  • jquery: find common elements in 2 sets of divs

    - by tsiger
    For markup like this:

        <div id="set1">
            <div id="100">a div</div>
            <div id="101">another div</div>
            <div id="102">another div 2</div>
            <div id="120">same div</div>
        </div>
        <div id="set2">
            <div id="105">a different div</div>
            <div id="101">another div</div>
            <div id="110">more divs</div>
            <div id="120">same div</div>
        </div>

    As you can see, both #set1 and #set2 contain 2 divs with the same id (101, 120). Is it possible somehow with jQuery to find the common elements and add a class to the divs in #set1 that have the same id as divs in #set2? In other words, after the script runs the above code would look like this:

        <div id="set1">
            <div id="100">a div</div>
            <div id="101" class="added">another div</div>
            <div id="102">another div 2</div>
            <div id="120" class="added">same div</div>
        </div>
        <div id="set2">
            <div id="105">a different div</div>
            <div id="101">another div</div>
            <div id="110">more divs</div>
            <div id="120">same div</div>
        </div>

    EDIT: Playing around with it, I did something, but I am not sure it can go anywhere. I created an array with the ids in both sets, and in Firebug I can see an array with the values:

        var arrEl = [];
        $('#set1 div, #set2 div').each(function(index) {
            var id = $(this).attr('id');
            arrEl.push(id);
            //maybe somehow check the array for the values that appear twice, and add the class to the
            //matching divs?
        });
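
    There is no need for the intermediate array: iterate the divs of #set1 and probe #set2 for each id. A minimal sketch (an attribute selector is used because a selector like #101 is invalid CSS when the id starts with a digit):

        $('#set1 > div').each(function () {
            var id = $(this).attr('id');
            if ($('#set2 > div[id="' + id + '"]').length) {
                $(this).addClass('added'); // id also present in #set2
            }
        });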


  • StringIndexOutOfBoundsException with currency converter Java program

    - by user1795926
    I am having trouble with a summary not showing up. I am supposed to modify a previous Java assignment by adding an array of objects. Within the loop, instantiate each individual object. Make sure the user cannot keep adding another Foreign conversion beyond your array size. After the user selects quit from the menu, prompt whether the user wants to display a summary report. If they select 'Y' then, using your array of objects, display the following report:

        Item  Conversion       Dollars  Amount
        1     Japanese Yen     100.00   32,000.00
        2     Mexican Peso     400.00   56,000.00
        3     Canadian Dollar  100.00   156.00
        etc.
        Number of Conversions = 3

    There are no errors when I compile, but when I run the program it is fine until I hit 0 to end the conversion and have it ask if I want to see a summary. This error displays:

        Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 0
            at java.lang.String.charAt(String.java:658)
            at Lab8.main(Lab8.java:43)

    My code:

        import java.util.Scanner;
        import java.text.DecimalFormat;

        public class Lab8 {
            public static void main(String[] args) {
                final int Max = 10;
                String a;
                char summary;
                int c = 0;
                Foreign[] Exchange = new Foreign[Max];
                Scanner Keyboard = new Scanner(System.in);
                Foreign.opening();
                do {
                    Exchange[c] = new Foreign();
                    Exchange[c].getchoice();
                    Exchange[c].dollars();
                    Exchange[c].amount();
                    Exchange[c].vertical();
                    System.out.println("\n" + Exchange[c]);
                    c++;
                    System.out.println("\n" + "Please select 1 through 4, or 0 to quit" + "\n");
                    c = Keyboard.nextInt();
                } while (c != 0);
                System.out.print("\nWould you like a summary of your conversions? (Y/N): ");
                a = Keyboard.nextLine();
                summary = a.charAt(0);
                summary = Character.toUpperCase(summary);
                if (summary == 'Y') {
                    System.out.println("\nCountry\t\tRate\t\tDollars\t\tAmount");
                    System.out.println("========\t\t=======\t\t=======\t\t=========");
                    for (int i = 0; i < Exchange.length; i++)
                        System.out.println(Exchange[i]);
                    Foreign.counter();
                }
            }
        }

    I looked at line 43 and it's this line:

        summary = a.charAt(0);

    But I am not sure what's wrong with it. Can anyone point it out? Thank you.
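
    The crash comes from a classic Scanner interaction: nextInt() reads the digit but leaves the trailing newline in the buffer, so the later nextLine() returns that leftover empty string, and charAt(0) on "" throws StringIndexOutOfBoundsException. A minimal sketch of the fix (same variables as above):

        System.out.print("\nWould you like a summary of your conversions? (Y/N): ");
        Keyboard.nextLine();          // consume the newline left behind by nextInt()
        a = Keyboard.nextLine();      // now reads the actual Y/N answer
        if (a.length() > 0) {         // guard against an empty entry just in case
            summary = Character.toUpperCase(a.charAt(0));
        }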


  • Where can I find "canonical" sample programs that give quick refreshers for any given language? [on hold]

    - by acheong87
    Note to those close-voting this question: I understand this isn't a conventional programming question, and I can agree with the reasoning that it's in the subjective domain (like best-of lists). In other ways, though, I think it's appropriate because, though it's not "a specific programming problem," nor concerning "a software algorithm," nor (strictly) concerning "software tools commonly used by programmers," I think it is a "practical, answerable [problem that is] unique to the programming profession," and I think it is "based on an actual [problem I] face." This note will be removed if the question gains popularity; this question will be deleted otherwise.

    I've been wanting this for some time now, because both approaches of (a) Googling for samples as I write every other line of code and (b) just winging it and seeing what errors crop up distract me from coding efficiently.

    I spend most of my time developing in C++, PHP, or Javascript, and every once in a while I have to do something in, say, VBA. In those times, it'd be convenient if I could just put up some sample code on a second monitor: something in between a cheat sheet (often too compact, and doesn't resemble anything that could actually compile/run) and a language reference (often too verbose or segmented; requires extra steps to search or click through an index), so I can just glance at it and recall things, like how to loop through non-empty cells in a column.

    I think there's a hidden benefit to seeing formed code that triggers the right spots in our brains to get back into a language we only need to brush up on. Similar in spirit is how http://ideone.com lets you click "Template" in any given language so you can get started without even doing a search. That template alone tells a lot, sometimes! Case-sensitivity, whitespace conventions, identifier conventions, the spelling of certain types, etc.

    I couldn't find a resource that pulled together such samples, so if there indeed doesn't exist such a repository, I was hoping this question would inspire professionals and experts to contribute links to the most useful sample code they've used for just this purpose: a keep-on-the-side, form-as-well-as-content, compilable/executable reminder of a language's basic and oft-used features.

    Personally I am interested in seeing "samplers" for: VBA, Perl, Python, Java, C# (though for some of these, autocompleters in Eclipse, Visual Studio, etc. help enough), awk, and sed. I'm tagging c++, php, and javascript because these are languages for which I'd best be able to evaluate whether proffered sample code matches what I had in mind.


  • IIS7 URL Redirect with Regex

    - by andyjv
    I'm preparing for a major overhaul of our shopping cart, which is going to completely change how the URLs are structured. For what it's worth, this is for Magento 1.7. An example URL would be:

        {domain}/item/sub-domain/sub-sub-domain-5-16-7-16-/8083770?plpver=98&categid=1027&prodid=8090&origin=keyword

    and I want to redirect it to:

        {domain}/catalogsearch/result/?q=8083710

    My web.config is:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Magento Required" stopProcessing="false">
                  <match url=".*" ignoreCase="false" />
                  <conditions>
                    <add input="{URL}" pattern="^/(media|skin|js)/" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="index.php" />
                </rule>
                <rule name="Item Redirect" stopProcessing="true">
                  <match url="^item/([_\-a-zA-Z0-9]+)/([_\-a-zA-Z0-9]+)/([_\-a-zA-Z0-9]+)(\?.*)" />
                  <action type="Redirect" url="catalogsearch/result/?q={R:3}" appendQueryString="true" redirectType="Permanent" />
                  <conditions trackAllCaptures="true">
                  </conditions>
                </rule>
              </rules>
            </rewrite>
            <httpProtocol allowKeepAlive="false" />
            <caching enabled="false" />
            <urlCompression doDynamicCompression="true" />
          </system.webServer>
        </configuration>

    Right now it seems the redirect is completely ignored, even though in the IIS GUI the sample URL passes the regex test. Is there a better way to redirect, or is there something wrong with my web.config?
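
    One likely culprit: in IIS URL Rewrite, the pattern in match url is tested against the path only; the query string is never part of it, so a pattern that requires (\?.*) can never match. Rule order matters too, since the Magento rule runs first and rewrites the request to index.php before the redirect rule is evaluated. A minimal sketch of the redirect rule with both points addressed (placed above the Magento rule; appendQueryString is turned off so plpver and friends don't tag along):

        <rule name="Item Redirect" stopProcessing="true">
          <!-- match url sees only "item/sub-domain/sub-sub-domain-5-16-7-16-/8083770" -->
          <match url="^item/[^/]+/[^/]+/([0-9]+)$" />
          <action type="Redirect" url="catalogsearch/result/?q={R:1}"
                  appendQueryString="false" redirectType="Permanent" />
        </rule>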


  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice versa. In spite of increasing the volume of data, the data transfer time remains the same, as if no data transfer is actually taking place. I am posting the code.

        #include <stdio.h>    /* Core input/output operations */
        #include <stdlib.h>   /* Conversions, random numbers, memory allocation, etc. */
        #include <math.h>     /* Common mathematical functions */
        #include <time.h>     /* Converting between various date/time formats */
        #include <cuda.h>     /* CUDA related stuff */
        #include <sys/time.h>

        __global__ void device_volume(float *x_d, float *y_d)
        {
            int index = blockIdx.x * blockDim.x + threadIdx.x;
        }

        int main(void)
        {
            float *x_h, *y_h, *x_d, *y_d, *z_h, *z_d;
            long long size = 9999999;
            long long nbytes = size * sizeof(float);
            timeval t1, t2;
            double et;
            x_h = (float*)malloc(nbytes);
            y_h = (float*)malloc(nbytes);
            z_h = (float*)malloc(nbytes);
            cudaMalloc((void **)&x_d, size * sizeof(float));
            cudaMalloc((void **)&y_d, size * sizeof(float));
            cudaMalloc((void **)&z_d, size * sizeof(float));
            gettimeofday(&t1, NULL);
            cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;    // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms
            printf("\n %ld\t\t%f\t\t", nbytes, et);
            et = 0.0;
            //printf("%f %d\n", seconds, CLOCKS_PER_SEC);
            // launch a kernel with a single thread to greet from the device
            //device_volume<<<1,1>>>(x_d, y_d);
            gettimeofday(&t1, NULL);
            cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;    // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms
            printf("%f\n", et);
            cudaFree(x_d);
            cudaFree(y_d);
            cudaFree(z_d);
            return 0;
        }

    Can anybody help me with this issue? Thanks.
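
    Two measurement pitfalls are worth ruling out here: the first CUDA API call pays a one-time context-creation cost that can dwarf the copy itself, and %ld is the wrong printf specifier for a long long, so the byte count being printed may not be what it seems. The CUDA event API times work on the GPU directly; a minimal sketch with those changes (same buffers as above):

        cudaEvent_t start, stop;
        float ms = 0.0f;
        cudaFree(0);                      /* force context creation before timing starts */
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
        cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice);
        cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);       /* wait until the copies have really finished */
        cudaEventElapsedTime(&ms, start, stop);
        printf("%lld bytes copied in %f ms\n", nbytes, ms);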


  • Java Sorting "queue" list based on DateTime and Z Position (part of school project)

    - by Kuchinawa
    For a school project I have a list of 50k containers that arrive on a boat. These containers need to be sorted in a list in such a way that the earliest departure DateTimes are at the top, and the containers stacked above those come before them. This list then gets used by a crane that picks them up in order. I started out with two Collections.sort() calls. The first gets them in the right XYZ order:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                return positionSort(contData1.getLocation(), contData2.getLocation());
            }
        });

    Then another one to reorder the dates while keeping the position in mind:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                int c = contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
                int p = positionSort2(contData1.getLocation(), contData2.getLocation());
                if (p != 0) c = p;
                return c;
            }
        });

    But I never got this method to work. What I have working now is rather quick and dirty and takes a long time to process (50 seconds for all 50k). First a sort on DateTime:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                return contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
            }
        });

    Then a correction function that bumps top containers up:

        containers = stackCorrection(containers);

        private static List<ContainerData> stackCorrection(List<ContainerData> sortedContainerList) {
            for (int i = 0; i < sortedContainerList.size(); i++) {
                ContainerData current = sortedContainerList.get(i);
                // 5 = Max Stack (0 index)
                if (current.getLocation().getZ() < 5) {
                    // Loop through possible containers above current
                    for (int j = 5; j > current.getLocation().getZ(); --j) {
                        // Search for container above
                        for (int k = i + 1; k < sortedContainerList.size(); ++k)
                            if (sortedContainerList.get(k).getLocation().getX() == current.getLocation().getX()) {
                                if (sortedContainerList.get(k).getLocation().getY() == current.getLocation().getY()) {
                                    if (sortedContainerList.get(k).getLocation().getZ() == j) {
                                        // Found -> move container above current
                                        sortedContainerList.add(i, sortedContainerList.remove(k));
                                        k = sortedContainerList.size();
                                        i++;
                                    }
                                }
                            }
                    }
                }
            }
            return sortedContainerList;
        }

    I would like to implement this in a better/faster way, so any hints are appreciated. :)
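
    The quadratic correction pass can be replaced by a single O(n log n) sort if the stacking constraint is encoded in the comparator: group containers by stack (X, Y), rank whole stacks by the earliest departure time found in each, and order each stack top-down (highest Z first). A minimal sketch, assuming java.util imports, that getZ() returns an int, and that getLeaveDateTimeFrom() is Comparable, as the comparators above imply:

        // Pass 1: find the most urgent container in every (X, Y) stack.
        final Map<String, ContainerData> urgent = new HashMap<String, ContainerData>();
        for (ContainerData cd : containers) {
            String key = cd.getLocation().getX() + "," + cd.getLocation().getY();
            ContainerData u = urgent.get(key);
            if (u == null || cd.getLeaveDateTimeFrom().compareTo(u.getLeaveDateTimeFrom()) < 0)
                urgent.put(key, cd);
        }
        // Pass 2: one sort -- more urgent stacks first; within a stack, top container first.
        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData a, ContainerData b) {
                String ka = a.getLocation().getX() + "," + a.getLocation().getY();
                String kb = b.getLocation().getX() + "," + b.getLocation().getY();
                int c = urgent.get(ka).getLeaveDateTimeFrom()
                              .compareTo(urgent.get(kb).getLeaveDateTimeFrom());
                if (c != 0) return c;                        // more urgent stack first
                if (!ka.equals(kb)) return ka.compareTo(kb); // keep distinct stacks apart
                return b.getLocation().getZ() - a.getLocation().getZ(); // top of stack first
            }
        });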


  • style a navigation link when a particular div is shown

    - by Matt Meadows
    I have jQuery working to show a particular div when a certain link is clicked. I have managed to apply the effect I'm after on the main navigation bar by putting an id on the body tag and using CSS to style when the id is found. However, I'd like to apply the same effect to the sub navigation when a certain div is present. How the main navigation is styled:

    HTML:

        <nav>
            <ul>
                <li id="nav-home"><a href="index.html">Home</a></li>
                <li id="nav-showreel"><a href="showreel.html">Showreel</a></li>
                <li id="nav-portfolio"><a href="portfolio.html">Portfolio</a></li>
                <li>Contact</li>
            </ul>
        </nav>

    CSS:

        body#home li#nav-home,
        body#portfolio li#nav-portfolio {
            background: url("Images/Nav_Underline.png") no-repeat;
            background-position: center bottom;
            color: white;
        }

    (Other links haven't been added to the styling, as those pages are still in development.) How the sub navigation is structured:

        <nav id="portfolioNav">
            <ul>
                <li id="portfolio-compositing"><a id="compositingWork" href="#">Compositing</a></li>
                <li id="portfolio-animation"><a id="animationWork" href="#">Animation</a></li>
                <li id="portfolio-motionGfx"><a id="GFXWork" href="#">Motion Graphics</a></li>
                <li id="portfolio-3D"><a id="3DWork" href="#">3D</a></li>
            </ul>
        </nav>

    As you can see, it has a similar format to the main navigation; however, I've tried the same approach and it doesn't work :( The JavaScript that switches the divs on the navigation click:

        <script type="text/javascript">
        $(document).ready(function() {
            $('#3DWork').click(function(){
                $('#portfolioWork').load('portfolioContent.html #Portfolio3D');
            });
            $('#GFXWork').click(function(){
                $('#portfolioWork').load('portfolioContent.html #motionGraphics');
            });
            $('#compositingWork').click(function(){
                $('#portfolioWork').load('portfolioContent.html #PortfolioCompositing');
            });
            $('#animationWork').click(function(){
                $('#portfolioWork').load('portfolioContent.html #PortfolioAnimation');
            });
        });
        </script>

    JSFiddle for full HTML & CSS: JSFiddle File. The effect I'm after: [image not shown]
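
    Because the sub pages are swapped in with .load() rather than real navigation, there is no new body tag for the CSS trick to key off; the simplest route is to mark the clicked sub nav link itself. A minimal sketch (ids as in the markup above; the .active class name is an assumption):

        $('#portfolioNav a').click(function () {
            $('#portfolioNav a').removeClass('active'); // clear the previous highlight
            $(this).addClass('active');                 // highlight the link that was clicked
        });

    with the matching CSS reusing the existing underline image:

        #portfolioNav a.active {
            background: url("Images/Nav_Underline.png") no-repeat center bottom;
            color: white;
        }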


  • CSS: my side bar overflow the container div's border when I set it's height to 100%

    - by mnml
    Hi, my side bar overflows the container div's border when I set its height to 100%. I would like to know if there is any way to make its height 100% minus some pixels. Here is the source:

        <div id="main">
            <br /><br />
            <div class="content">
                <div id="sidecontent">
                    <h1 id="title">Title</h1>
                    *****
                </div>
                <div id="sidebar">
                    <div class="sidebox">
                        ****
                    </div>
                </div>
            </div>
            <div class="bottom"></div>
        </div>

        #main {
            position: relative;
            background: transparent url('/public/images/main_bg.png') top left repeat-y;
            padding: 37px 37px 37px 37px;
            margin-left: auto;
            margin-right: auto;
            width: 940px;
            min-height: 363px;
        }
        #main div.top, #main div.bottom {
            height: 70px;
            width: 1015px;
            position: absolute;
            left: 0px;
        }
        #main div.content {
            padding: 0 15px 0 15px;
        }
        #sidecontent {
            width: 675px;
        }
        #sidebar {
            background: #fff url('/public/images/bg_side.png') top right repeat-y;
            position: absolute;
            height: 100%;
            right: 34px;
            top: 42px;
            width: 200px;
            padding: 10px 10px 0px 40px;
            z-index: 50;
        }
        .created_at {
            color: gray;
        }
        .sidebox {
            margin-bottom: 5px;
        }
        #main div.top {
            top: -70px;
            background: transparent url(/public/images/main_top.png) bottom no-repeat;
        }
        #main div.bottom {
            bottom: -70px;
            background: transparent url(/public/images/main_bottom.png) top no-repeat;
        }
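
    Two common ways to get "100% minus N pixels" on an absolutely positioned element: the CSS calc() function, or pinning both the top and bottom offsets so no explicit height is needed at all. Minimal sketches (the 42px/37px offsets are taken from the rules above; adjust to taste):

        /* Option 1: calc() */
        #sidebar { height: calc(100% - 42px); }

        /* Option 2: pin both edges; the browser derives the height */
        #sidebar { position: absolute; top: 42px; bottom: 37px; right: 34px; }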


  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site. A large % of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site.

    This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even work with non-ASP.NET content).

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Measuring the SEO of your website with the Microsoft SEO Toolkit

    A few months ago I blogged about the free SEO Toolkit that we've shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I'll cover later in this blog post. [report screenshot omitted]

    Search Relevancy and URL Splitting

    Two of the important things that search engines evaluate when assessing your site's "search relevancy" are:

    - How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
    - The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.

    One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site. Doing so will hurt with both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL). Not allowing external sites to link to you in different ways sounds easy in theory - but you might wonder what exactly this means in practice and how you avoid it.

    4 Really Common SEO Problems Your Sites Might Have

    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens, external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.

    SEO Problem #1: Default Document

    IIS (and other web servers) supports the concept of a "default document". This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient - but means that by default this content is available via two different publicly exposed URLs (which is bad). For example:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don't realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs - they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

        http://scottgu.com
        http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes sites support a web-site with both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

        http://scottgu.com/albums.aspx/
        http://www.scottgu.com/albums.aspx/

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven't been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site.

    The "good news" is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

    You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green "Install Now" button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine.

    Once installed you'll find that a new "URL Rewrite" icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel - which will display the list of URL Rewrite rules configured for a particular application or site. Notice that our rewrite rule list is currently empty (which is the default when you first install the extension). We can click the "Add Rule..." link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the "default document" feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let's look at how we can create such a rule. We'll begin by clicking the "Add Rule" link in the admin panel.
    This will cause the "Add Rule" dialog to display. We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule. This will display an empty rule-editor pane. Don't worry - setting up the rule is easy. The following 4 steps explain how to do so.

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let's name this rule our "Default Document URL Rewrite" rule.

    Step 2: Setup the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don't worry if you aren't good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site. Below we are going to specify the following regular expression as our pattern rule:

        (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx. Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the "ignore case" checkbox is selected it will match both "Default.aspx" as well as "default.aspx" within the URL.

    One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring. In that dialog I've added a "products/default.aspx" URL and clicked the "Test" button. This will give me immediate feedback on whether the rule will execute for it.

    Step 3: Setup a Permanent Redirect Action

    We'll then setup an action to occur when our regular expression pattern matches the incoming URL. In the action dialog I've changed the "Action Type" drop down to be a "Redirect" action. The "Redirect Type" will be a HTTP 301 Permanent redirect - which means search engines will follow it. I've also set the "Redirect URL" property to be:

        {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/.

    The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to.
    Step 4: Apply and Save the Rule

    Our final step is to click the "Apply" button in the top right hand of the IIS admin tool - which will cause the tool to persist the URL Rewrite rule into our application's root web.config file (under a <system.webServer/rewrite> configuration section):

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely. This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one. Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again. This will cause the "Add Rule" dialog to appear again. Unlike the previous scenario (where we created a "Blank Rule"), with this scenario we can take advantage of a built-in "Enforce lowercase URLs" rule template. When we click the "ok" button we'll see a dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs. When we click the "Yes" button we'll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it - and automatically sends users to a lower-case version of the URL.

    We can click the "Apply" button to use this rule "as-is" and have it apply to all incoming URLs to our site. Because my www.scottgu.com site uses ASP.NET Web Forms, I'm going to make one small change to the rule we generated above - which is to add a condition that will ensure that URLs to ASP.NET's built-in "WebResource.axd" handler are excluded from our case-sensitivity URL Rewrite logic. URLs to the WebResource.axd handler will only come from server-controls emitted from my pages - and will never be linked to from external sites. While my site will continue to function fine if we redirect these URLs to automatically be lower-case - doing so isn't necessary and will add an extra HTTP redirect to many of my pages.
    The good news is that adding a condition that prevents my URL Rewriting rule from running for certain URLs is easy. We simply need to expand the "Conditions" section of the rule editor, then click the "Add" button to add a condition clause. This will bring up the "Add Condition" dialog. There I've entered {URL} as the Condition input - and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd". This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) - but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons. So setting up a condition clause makes sense to add.

    When I click the "ok" button and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

        http://scottgu.com
        http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again. This will cause the "Add Rule" dialog to appear again. The URL Rewrite admin tool has a built-in "Append or remove the trailing slash symbol" rule template.
    When we select it and click the "ok" button we'll see a dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present. Like within our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect from happening for those URLs.

    When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash - and if the URL is not processed by either a directory or a file. This will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
                <rule name="Trailing Slash" stopProcessing="true">
                  <match url="(.*[^/])$" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com
        http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier covers scenarios where a site works with both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

        http://www.scottgu.com/albums.aspx
        http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again. This will cause the "Add Rule" dialog to appear again. The URL Rewrite admin tool has a built-in "Canonical domain name" rule template.
    When we select it and click the "ok" button we'll see a dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL. There I'm entering the primary URL address I want to expose to the web: scottgu.com. When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix. This will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Cannonical Hostname">
                  <match url="(.*)" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                  </conditions>
                  <action type="Redirect" url="http://scottgu.com/{R:1}" />
                </rule>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
                <rule name="Trailing Slash" stopProcessing="true">
                  <match url="(.*[^/])$" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://www.scottgu.com/albums.aspx
        http://scottgu.com/albums.aspx

    Notice that the first URL (which has the "www" prefix) now automatically does a redirect to the second URL, which does not have the www prefix. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    4 Simple Rules for Improved SEO

    The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have. The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site - and without having to break any existing links already pointing at your site. Users who follow existing links will be automatically redirected to the new URLs you wish to publish. And search engines will start to give your site a higher search relevancy ranking - which will list your site higher in search results and drive more traffic to it.
    Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web-site or application. Clicking any of the rules will open the rules editor back up and allow you to tweak/customize/save them further.

    Summary

    Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven't already, download and use the SEO Toolkit to analyze the SEO of your sites today.

    New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I've talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today - without requiring you to change a lot of code.

    The URL Rewrite Extension provides a bunch of additional great capabilities - far beyond just SEO - as well. I'll be covering these additional capabilities more in future blog posts.

    Hope this helps,

    Scott

    Read the article

  • How to install ffmpeg in cPanel

    - by Ajay Chthri
    I'm using a dedicated Linux server, so I need to install ffmpeg through cPanel. The only ffmpeg entry I can find is under Main >> Software >> Install a Perl Module, but I'm writing my script in PHP, so how can I use ffmpeg from PHP rather than Perl? When I try to install ffmpeg as a Perl module I get this response:

Checking C compiler....C compiler (/usr/bin/cc) OK (cached Tue Jan 17 19:16:31 2012)....Done
CPAN fallback is disabled since /var/cpanel/conserve_memory exists, and cpanm is available.
Method: Using Perl Expect, Installer: cpanm
You have make /usr/bin/make
Falling back to HTTP::Tiny 0.009
You have /bin/tar: tar (GNU tar) 1.15.1
You have /usr/bin/unzip
You have Cpanel::HttpRequest 2.1
Testing connection speed...(using fast method)...Done
Ping:2 (ticks) Testing connection speed to cpan.knowledgematters.net using pureperl...(28800.00 bytes/s)...Done
Ping:2 (ticks) Testing connection speed to cpan.develooper.com using pureperl...(22233.33 bytes/s)...Done
Ping:2 (ticks) Testing connection speed to cpan.schatt.com using pureperl...(32750.00 bytes/s)...Done
Ping:3 (ticks) Testing connection speed to cpan.mirror.facebook.net using pureperl...(14050.00 bytes/s)...Done
Ping:2 (ticks) Testing connection speed to cpan.mirrors.hoobly.com using pureperl...(5150.00 bytes/s)...Done
Five usable mirrors located
Ping:0 (ticks) Testing connection speed to 208.109.109.239 using pureperl...(28950.00 bytes/s)...Done
Ping:2 (ticks) Testing connection speed to 208.82.118.100 using pureperl...(19300.00 bytes/s)...Done
Ping:1 (ticks) Testing connection speed to 69.50.192.73 using pureperl...(19300.00 bytes/s)...Done
Three usable fallback mirrors located
Mirror Check passed for cpan.schatt.com (/index.html)
Searching on cpanmetadb ...
Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:0).......(request attempt 1/12)...Using dns cache file /root/.HttpRequest/cpanmetadb.cpanel.net......searching for mirrors (mirror search attempt 1/3)......5 usable mirrors located. (less then expected)......mirror search success......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done
Searching Video::FFmpeg on cpanmetadb (http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release) ...
Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:1).......(request attempt 1/12)......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done
Source: fastest CPAN mirror ...
--> Working on Video::FFmpeg
Fetching http://cpan.schatt.com//authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz ...
Fetching http://cpan.schatt.com/authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz (connected:1).......(request attempt 1/12)...Resolving cpan.schatt.com...(resolve attempt 1/65)......connecting to 66.249.128.125...@66.249.128.125......connected......receiving...25%...50%...75%...100%......request success......Done
OK
Unpacking Video-FFmpeg-0.47.tar.gz
Video-FFmpeg-0.47/
Video-FFmpeg-0.47/Changes
Video-FFmpeg-0.47/FFmpeg.xs
Video-FFmpeg-0.47/MANIFEST
Video-FFmpeg-0.47/META.yml
Video-FFmpeg-0.47/Makefile.PL
Video-FFmpeg-0.47/README
Video-FFmpeg-0.47/lib/
Video-FFmpeg-0.47/lib/Video/
Video-FFmpeg-0.47/lib/Video/FFmpeg/
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVFormat.pm
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Audio.pm
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Subtitle.pm
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Video.pm
Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream.pm
Video-FFmpeg-0.47/lib/Video/FFmpeg.pm
Video-FFmpeg-0.47/ppport.h
Video-FFmpeg-0.47/t/
Video-FFmpeg-0.47/t/Video-FFmpeg.t
Video-FFmpeg-0.47/test
Video-FFmpeg-0.47/test.mp4
Video-FFmpeg-0.47/typemap
Entering Video-FFmpeg-0.47
Checking configure dependencies from META.yml
META.yml not found or unparsable. Fetching META.yml from search.cpan.org
Fetching http://search.cpan.org/meta/Video-FFmpeg-0.47/META.yml (connected:1).......(request attempt 1/12)...Resolving search.cpan.org...(resolve attempt 1/65)......connecting to 199.15.176.161...@199.15.176.161......connected......receiving...100%......request success......Done
Configuring Video-FFmpeg-0.47 ... Running Makefile.PL
Perl v5.10.0 required--this is only v5.8.8, stopped at Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1. N/A
! Configure failed for Video-FFmpeg-0.47. See /home/.cpanm/build.log for details.
Perl Expect failed with non-zero exit status: 256
All available perl module install methods have failed

Can anyone guide me on how to install ffmpeg in cPanel? Thanks in advance.
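The key line in the log is "Perl v5.10.0 required--this is only v5.8.8": the Video::FFmpeg Perl module cannot even configure on this system, and a Perl binding is not what a PHP script needs anyway. A common alternative is to install the ffmpeg binary itself (outside cPanel, e.g. compiled from source) and shell out to it from PHP. A minimal sketch along those lines, assuming ffmpeg lives at /usr/local/bin/ffmpeg and using hypothetical file paths:

<?php
// Illustrative sketch: call an ffmpeg binary from PHP instead of going
// through the broken Perl module. Adjust the binary path and file names.
$ffmpeg = '/usr/local/bin/ffmpeg';
$input  = escapeshellarg('/home/user/videos/in.mp4');   // hypothetical path
$output = escapeshellarg('/home/user/videos/out.flv');  // hypothetical path

// -y overwrites the output file if it already exists; 2>&1 captures
// ffmpeg's progress output, which goes to stderr.
$cmd = $ffmpeg . ' -y -i ' . $input . ' ' . $output . ' 2>&1';
exec($cmd, $lines, $status);

if ($status !== 0) {
    echo "ffmpeg failed (exit $status):\n", implode("\n", $lines), "\n";
} else {
    echo "conversion finished\n";
}
?>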

    Read the article

  • Trying to update Debian not working

    - by Sean
    As root I run the command apt-get update and get these error messages:

Err http://security.debian.org lenny/updates Release.gpg Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/main Translation-en_US Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/contrib Translation-en_US Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/non-free Translation-en_US Could not resolve 'security.debian.org'
Err http://www.backports.org lenny-backports Release.gpg Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/main Translation-en_US Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/contrib Translation-en_US Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/non-free Translation-en_US Could not resolve 'www.backports.org'
Err http://ftp.us.debian.org lenny Release.gpg Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/main Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/contrib Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/non-free Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://http.us.debian.org stable Release.gpg Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/main Translation-en_US Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/contrib Translation-en_US Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/non-free Translation-en_US Could not resolve 'http.us.debian.org'
Reading package lists... Done
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Some index files failed to download, they have been ignored, or old ones used instead.
W: You may want to run apt-get update to correct these problems

This is on a DreamPlug Linux server, configured so that my network starts at 192.168.1.2, with the router port-forwarding SSH to the server at 192.168.1.6.
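Every failure above says "Could not resolve", so the breakage is name resolution rather than apt itself. As a quick confirmation, here is a minimal PHP sketch (assuming the PHP CLI is installed; the same test can be done with ping or host) that tries to resolve the affected mirrors:

<?php
// Illustrative diagnostic: test DNS resolution for the hosts apt
// complains about before touching sources.list.
$hosts = array('security.debian.org', 'ftp.us.debian.org', 'www.backports.org');

foreach ($hosts as $host) {
    $ip = gethostbyname($host);  // returns the name unchanged on failure
    echo $host, ' => ', ($ip === $host ? 'NOT RESOLVED' : $ip), "\n";
}
// If nothing resolves, the fix lives in /etc/resolv.conf (the nameserver
// lines), not in the apt configuration.
?>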

    Read the article

  • Trouble with DNS and Debian update

    - by Sean
    I tried to update my Debian DreamPlug server by running apt-get update as root and received these errors:

Err http://security.debian.org lenny/updates Release.gpg Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/main Translation-en_US Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/contrib Translation-en_US Could not resolve 'security.debian.org'
Err http://security.debian.org lenny/updates/non-free Translation-en_US Could not resolve 'security.debian.org'
Err http://www.backports.org lenny-backports Release.gpg Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/main Translation-en_US Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/contrib Translation-en_US Could not resolve 'www.backports.org'
Err http://www.backports.org lenny-backports/non-free Translation-en_US Could not resolve 'www.backports.org'
Err http://ftp.us.debian.org lenny Release.gpg Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/main Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/contrib Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://ftp.us.debian.org lenny/non-free Translation-en_US Could not resolve 'ftp.us.debian.org'
Err http://http.us.debian.org stable Release.gpg Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/main Translation-en_US Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/contrib Translation-en_US Could not resolve 'http.us.debian.org'
Err http://http.us.debian.org stable/non-free Translation-en_US Could not resolve 'http.us.debian.org'
Reading package lists... Done
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz Could not resolve 'ftp.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz Could not resolve 'http.us.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz Could not resolve 'security.debian.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz Could not resolve 'www.backports.org'
W: Some index files failed to download, they have been ignored, or old ones used instead.
W: You may want to run apt-get update to correct these problems

I am able to ping IP addresses but not hostnames. I can't seem to figure out the problem. My /etc/resolv.conf file contains "nameserver 192.168.1.2", which is my router.
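Since raw IPs ping but names don't resolve, one way to confirm that only DNS is broken is to speak HTTP to a mirror by IP and supply the Host header by hand. A minimal PHP sketch; the IP below is a documentation placeholder (203.0.113.10), not a real mirror address, so substitute one you can actually ping:

<?php
// Illustrative check: connect by IP so DNS never runs, then send the
// Host header manually. Replace the placeholder IP with a mirror address.
$ip   = '203.0.113.10';        // placeholder -- not a real mirror
$host = 'ftp.us.debian.org';

$fp = fsockopen($ip, 80, $errno, $errstr, 5);
if (!$fp) {
    die("TCP connect failed: $errstr ($errno)\n");
}
fwrite($fp, "HEAD / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");
while (!feof($fp)) {
    echo fgets($fp);
}
fclose($fp);
// A normal HTTP answer here means only resolution is broken; the router
// at 192.168.1.2 is apparently not answering DNS queries, so pointing
// /etc/resolv.conf at a resolver that does answer should fix apt.
?>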

    Read the article

  • Weird networking problem (Linksys, Windows 7)

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start, but here is the basic summary of the issue: during general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work and I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 HTTP requests. Restarting the router seems to fix the issue, but it can crop up hours or days later.

I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here. Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced this on the following systems:

- Windows 7 64-bit on my desktop
- Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
- Windows Vista 32-bit on my laptop (both wireless and wired)
- Windows XP SP3 on another laptop (both wireless and wired)

Using Wireshark to sniff packets seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to the HTTP request. However, other network apps continued to work, i.e. I would still receive IMs on Windows Live Messenger.

Disabling IPv6 had no effect. Updating router firmware to the latest stock firmware by Linksys had no effect. Switching to dd-wrt firmware had no effect. By "no effect" I mean that although the restart required by firmware updates fixed the problem at the time, it still came back.

A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router slowdown link, and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem.

Currently the computer I am using is running Windows 7 64-bit. I would just like to reiterate some of the reasons that I was confused by the issue:

- Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
- For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
- Updating the router firmware, even switching to open source firmware, did nothing. So it seemed to be a computer issue.
- On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
- Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?

Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

Router: Linksys WRT54G2 1.0 running dd-wrt firmware
Wireless Card: Alfa AWUS036H
OS: Windows 7 64-bit

EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted.

Signed, a very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.
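One way to pin down how often the stalls happen, and whether they line up with router uptime, is to poll a URL on a schedule and log the failures. A minimal PHP sketch along those lines (the target URL and log file name are assumptions; run it from the CLI and stop it with Ctrl+C):

<?php
// Illustrative monitor: poll one URL with a short timeout and log when
// requests stall, so stalls can be correlated with router restarts.
$url = 'http://192.168.1.1/';   // the router page that stops loading
$ctx = stream_context_create(array('http' => array('timeout' => 5)));

while (true) {
    $t    = microtime(true);
    $body = @file_get_contents($url, false, $ctx);
    $ms   = (int) ((microtime(true) - $t) * 1000);
    $line = date('Y-m-d H:i:s')
          . ($body === false ? ' STALLED/FAILED' : ' ok')
          . " ({$ms} ms)\n";
    file_put_contents('poll.log', $line, FILE_APPEND);
    sleep(30);
}
?>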

    Read the article

  • PHP crashing (seg-fault) under mod_fcgid, Apache

    - by Andras Gyomrey
    I've been programming a site using:

- Zend Framework 1.11.5 (complete MVC)
- PHP 5.3.6
- Apache 2.2.19
- CentOS 5.6 i686 virtuozzo on VPS
- cPanel WHM 11.30.1 (build 4)
- MySQL 5.1.56-log
- Mysqli API 5.1.56

The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults:

[Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
[Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:

zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

Otherwise the server doesn't accept requests and I get "The connection with the server was reset". It is possible that I have to disable eAccelerator too, for the same reason. I still don't get why Apache sometimes runs with it and sometimes doesn't:

extension="eaccelerator.so"

Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can get a PHP crash DETERMINISTICALLY:

<?php
class Admin_DbController extends Controller_BaseController
{
    public function updateSqlDefinitionsAction()
    {
        $db = Zend_Registry::get('db');
        $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
    }
}
?>

BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash; I have to run many parallel requests for a few seconds to get one:

<?php
class Admin_DbController extends Controller_BaseController
{
    public function updateSqlDefinitionsAction()
    {
        $db = Zend_Registry::get('db');
        $tableList = $db->listTables();
        foreach ($tableList as $tableName) {
            $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
            file_put_contents(
                DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                $row['Create Table'] . ';'
            );
        }
    }
}
?>

Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions, and with me making many parallel requests) is when it crashes.

About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which enables only one child process) I can't do parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php

My concrete question is: what do I do to get a core dump?

root@GWT4 [~]# httpd -V
Server version: Apache/2.2.19 (Unix)
Server built: Jul 20 2011 19:18:58
Cpanel::Easy::Apache v3.4.2 rev9999
Server's Module Magic Number: 20051115:28
Server loaded: APR 1.4.5, APR-Util 1.3.12
Compiled using: APR 1.4.5, APR-Util 1.3.12
Architecture: 32-bit
Server MPM: Prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=128
-D HTTPD_ROOT="/usr/local/apache"
-D SUEXEC_BIN="/usr/local/apache/bin/suexec"
-D DEFAULT_PIDLOG="logs/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="logs/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ("/NoExecute=OptOut" in boot.ini) with no exceptions? "No exceptions" = an empty list of programs to which DEP does not apply. DEP = Data Execution Prevention (hardware).

One would expect the two to work the same way, but it makes a difference for some applications, e.g. for all versions of UltraEdit 14 (14.2). It crashes at startup with DEP always on, at least on Microsoft Windows XP Professional x64 Edition. (2010-03-11: this problem has been fixed with UltraEdit 15.2 and later.)

Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP-aware EXE packer (ASPack) that Microsoft coded a backdoor for.

Is there a difference between Windows XP, Windows Vista and Windows 7? Is there a difference between 32-bit and 64-bit versions of Windows?

Sources:

From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux, 2007-02-26: "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Basically Microsoft checks the executable header for a section matching one of the 3 strings. If one of these strings is found, DEP will be turned OFF for this application by Windows. ... 'aspack', 'pcle', 'sforce'"

From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson: "I can’t find any documentation on Microsoft’s site anywhere, because we’re seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren’t having any opt-out programs. It turns out it’s not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won’t launch, Windows blocks it from launching ... in always-on mode."

From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson: "... IrfanView ... won’t run with DEP turned on. It’s because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn’t because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it’s running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, is DEP-compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."
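The packer-string backdoor described above can be approximated in a few lines. This PHP sketch greps the start of an executable for the three markers; it is only a rough approximation of the real check, which reads PE section names, so treat a match as a hint rather than proof:

<?php
// Rough illustrative scan: per the sources quoted above, OptOut mode
// whitelists executables whose PE headers contain one of three packer
// strings. This greps the first 4 KB of a binary for them.
if ($argc < 2) {
    die("usage: php depscan.php file.exe\n");
}
$exe  = $argv[1];
$head = file_get_contents($exe, false, null, 0, 4096);

foreach (array('aspack', 'pcle', 'sforce') as $marker) {
    if (stripos($head, $marker) !== false) {
        echo "$exe contains '$marker' -- OptOut would likely skip DEP\n";
        exit;
    }
}
echo "$exe has no known packer marker -- DEP applies in OptOut too\n";
?>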

    Read the article

  • Redmine with Apache 2 + Passenger nightmare --- site is up and available, but Redmine doesn't execute

    - by CptSupermrkt
    I was determined to figure this out myself, but I've been at it for a total of more than 10 hours and I just can't figure it out. First, let me detail my environment (which I cannot change):

Server version: Apache/2.2.15 (Unix)
Ruby version: ruby 1.9.3p448
Rails version: Rails 4.0.1
Passenger version: Phusion Passenger version 4.0.5
Redmine version: 2.3.3

I have followed the Redmine instructions all the way through the test-webserver step to check that the installation was successful, with this command:

ruby script/rails server webrick -e production

The roadblock I cannot overcome is getting Apache and Passenger to interpret and properly serve Redmine. I have searched pretty much every possible link within the first 10 pages or so of Google results. Everywhere I go I come across conflicting/contradicting/outdated information.

We have a "weird" setup with Apache (which I inherited and cannot change). Redmine needs to be served through SSL, but Apache already has another website it's serving through SSL called Twiki. By "weird", what I mean is that our file structure is entirely different from all the tutorials out there on this version of Apache, which have directories like "sites-available" and such. Here are the abbreviated versions of some of our config files.

/etc/httpd/conf/httpd.conf (the global configuration file --- note that NO VirtualHost is defined here):

ServerRoot "/etc/httpd"
...
LoadModule passenger_module /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5/libout/apache2/mod_passenger.so
PassengerRoot /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5
PassengerDefaultRuby /usr/local/pkg/ruby/1.9.3-p448/bin/ruby
Include conf.d/*.conf
...
User apache
Group apache
...
DocumentRoot "/var/www/html"

So just to clarify, the above httpd.conf file does NOT have a VirtualHost section.

/etc/httpd/conf.d/ssl.conf (defines the VirtualHost for SSL):

Listen 443
<VirtualHost _default_:443>
    SSLEngine on
    ...
    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
</VirtualHost>

/etc/httpd/conf.d/twiki.conf (this works just fine --- note this does NOT define a VirtualHost):

ScriptAlias /twiki/bin/ "/var/www/twiki/bin/"
Alias /twiki/ "/var/www/twiki/"
<Directory "/var/www/twiki/bin">
    AllowOverride None
    Order Deny,Allow
    Deny from all
    AuthType Basic
    AuthName "our team"
    AuthBasicProvider ldap
    ...a lot of ldap and authorization stuff
    Options ExecCGI FollowSymLinks
    SetHandler cgi-script
</Directory>

/etc/httpd/conf.d/redmine.conf:

Alias /redmine/ "/var/www/redmine/public/"
<Directory "/var/www/redmine/public">
    Options Indexes ExecCGI FollowSymLinks
    Order allow,deny
    Allow from all
    AllowOverride all
</Directory>

The amazing thing is that this doesn't completely NOT work: I can successfully open https://someserver/redmine/ with SSL, and the https://someserver/twiki/ site remains unaffected. This tells me that it IS possible to have two separate sites up with one SSL configuration, so I don't think that's the problem. The problem is that it opens up to the file index. I can navigate around my Redmine file structure, but no code ever gets executed. For example, there is a file included with Redmine called dispatch.fcgi in the public folder. https://someserver/redmine/dispatch.fcgi opens, but just as plain-text code in the browser. As I understand it, in the case of using Passenger, CGI and FastCGI stuff is irrelevant/unused.
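A quick way to confirm the symptom described above (files served as static text instead of being executed) is to fetch dispatch.fcgi and look for its shebang line. A minimal PHP sketch; "someserver" is the poster's own placeholder, allow_url_fopen plus the openssl extension are assumed, and certificate verification is disabled on the assumption that the vhost uses the self-signed localhost certificate:

<?php
// Illustrative probe: dispatch.fcgi normally begins with a shebang line,
// so seeing "#!" in the response body means Apache handed back the raw
// file instead of letting Passenger/FCGI execute anything.
$ctx = stream_context_create(array(
    'ssl' => array('verify_peer' => false, 'verify_peer_name' => false),
));
$body = file_get_contents('https://someserver/redmine/dispatch.fcgi', false, $ctx);

if ($body !== false && strpos($body, '#!') === 0) {
    echo "Raw source came back -- Passenger is not handling /redmine/\n";
} else {
    echo "Script output (or an error) came back -- the file was executed\n";
}
?>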

    Read the article

  • Apache cyclic redirection problem

    - by slicedlime
    I have an extremely weird problem with one of my sites. I run a number of blogs off a single Apache 2 server with a shared WordPress install. Each site has a www.domain.com main domain, but a ServerAlias of domain.com. This works fine for all the blogs except one, which instead of redirecting to www.domain.com redirects to domain.com, causing a cyclic redirection. The configuration for each host looks like this:

<VirtualHost *:80>
    ServerName www.domain.com
    ServerAlias domain.com
    DocumentRoot "/home/www/www.domain.com"
    <Directory "/home/www/www.domain.com">
        Options MultiViews Indexes Includes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

As this didn't work, I tried a mod_rewrite rule for it, which still didn't redirect correctly. The weird thing here is that if I rewrite it to redirect to any other domain, it will redirect correctly, even to another subdomain. So a rewrite rule for domain.com that redirects to foo.domain.com works, but not one to www.domain.com. In the same way, trying to redirect to www.domain.com/foo/ ends me up with a redirection to domain.com/foo/.

Even weirder, I tried setting up domain.com as a completely separate virtual host, and ran this PHP test script as index.php on it:

<?php header('Location: http://www.domain.com/' . $_SERVER["REQUEST_URI"]); ?>

Hitting domain.com still redirects to domain.com! Checking the headers sent to the server verifies that I get exactly the redirect URL I wanted, except with the "www." stripped. This has now foiled me for a long time. I've diffed the vhost configs for a working domain and the faulty one, and the only difference is the domain name itself. I've diffed the .htaccess files for both sites, and the only difference is a path related to the sharing of the WordPress installation between the blogs:

php_value include_path ".:/home/www/www.domain.com/local/:/home/www/www.domain.com/"

I searched through everything in /etc (including the Apache conf) for the domain name of the faulty host and found nothing weird, searched through everything in /etc for one of the working ones to make sure it didn't differ, and I even went so far as to check the DNS setup of the two domains to make sure there wasn't anything strange going on. Here's the response for the faulty one:

user@localhost dir $ wget -S domain.com
--2010-03-20 21:47:24-- http://domain.com/
Resolving domain.com... x.x.x.x
Connecting to domain.com|x.x.x.x|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Via: 1.1 ISA
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Length: 0
Date: Sat, 20 Mar 2010 20:47:24 GMT
Location: http://domain.com/
Content-Type: text/html; charset=UTF-8
Server: Apache
X-Powered-By: PHP/5.2.10-pl0-gentoo
X-Pingback: http://domain.com/xmlrpc.php
Keep-Alive: timeout=15, max=100
Location: http://domain.com/ [following]

And a working one:

user@localhost dir $ wget -S domain.com
--2010-03-20 21:51:33-- http://domain.com/
Resolving domain.com... x.x.x.x
Connecting to domain.com|x.x.x.x|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Via: 1.1 ISA
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Length: 0
Date: Sat, 20 Mar 2010 20:51:33 GMT
Location: http://www.domain.com/
Content-Type: text/html; charset=UTF-8
Server: Apache
X-Powered-By: PHP/5.2.10-pl0-gentoo
X-Pingback: http://www.domain.com/xmlrpc.php
Keep-Alive: timeout=15, max=100
Location: http://www.domain.com/ [following]

I'm stumped. I've had this problem for a long time, and I feel like I've tried everything. I can't see why the different domains would act differently under the same installation with the same settings. Help :(
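One detail worth noting in both traces is the "Via: 1.1 ISA" header: an ISA proxy sits between wget and Apache, so the responses being inspected have already passed through it. A minimal PHP sketch that connects to the web server directly and prints the raw response headers, taking the proxy out of the path; "x.x.x.x" is the poster's own placeholder for the server address, so substitute the real IP:

<?php
// Illustrative probe: bypass any intermediate proxy by opening a raw
// socket to the server and sending the request by hand, then print the
// headers Apache itself sends back.
$ip   = 'x.x.x.x';       // the poster's placeholder -- use the real IP
$host = 'domain.com';

$fp = fsockopen($ip, 80, $errno, $errstr, 5);
if (!$fp) {
    die("connect failed: $errstr ($errno)\n");
}
fwrite($fp, "GET / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");
while (!feof($fp)) {
    $line = fgets($fp);
    if ($line === "\r\n") {
        break;           // stop after the headers
    }
    echo $line;          // inspect the Location header in particular
}
fclose($fp);
// If Apache answers with "Location: http://www.domain.com/" here, the
// "www." is being stripped by the proxy rather than by the vhost config.
?>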

    Read the article
