Search Results

Search found 1685 results on 68 pages for 'no more guessing'.


  • Phone number in meta description: bad or good for local rankings (NAP)?

    - by bybe
    Once again I'm working on improving people's local rankings, and I've learnt so much about local rankings in the past two weeks that it feels like my brain is going to pop. The question is fairly simple for anyone who works on local rankings, and I appreciate the answer may involve a little guesswork, but isn't SEO mostly guessing anyway?

    From what I've read and learned, Google works off a system called NAP for local rankings (along with many other factors, but this question is purely about NAP). For those who care about local rankings, NAP stands for Name of business / Address of business / Phone number of business. From what I've read, you don't need the whole NAP to be on one website; just a P or just an N can help towards your local rankings. It's believed that a full NAP rewards more than just a P or an N, but knowing Google they might have a diversity checker, which is my concern and which I'll get to in a moment. Of course, sites are weighted differently depending on where your business is posted: your NAP details are certainly going to be more credible in your national phone book than on some blog site, so take that into consideration too.

    Pure guess (not part of the question, but it makes a good read on my thinking): my guesswork makes me believe the formula would look something like (N + A + P) x T, where N, A and P are each 1 or 0 to indicate whether the name, address and phone number are present, and T is 1-100 to indicate the level of trust of the site. So a phone number appearing on YouTube might score something like (0 + 0 + 1) x 95 = 95, and a full NAP appearing in your national phone book might score something like (1 + 1 + 1) x 100 = 300. Please note that I'm not saying this is the sole factor, and I'm sure it's way more complex than this, with other on-page and off-page factors (reviews, links, clicks) and so on, but it's still a contributor.

    The question: it's fairly simple, and I imagine hard to impossible to answer definitively, but maybe someone has seen official wording on it somewhere. Is it bad to include the address or phone number in the meta description? I ask because one of my competitors has these elements in their meta descriptions and their local rankings are absolutely superb. The problem I have with this is scraper sites like 'Similar To', 'SEO Rankings' and thousands of the other scraper networks that crawl your site and then make URLs with your site information; they are mostly limited to your meta description, which means your phone number, address and sometimes even your company name (if the domain is an exact match) will appear as AP, or even NAP, on thousands of websites. So, is it a bad strategy to include the phone number and address in the meta description? Everything I've read suggests it's good, with perhaps the downside of lowering the quality of the description for click-throughs, but top rankings would increase those tenfold anyhow.
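
    A tiny Python sketch of the guessed formula above, purely to make the arithmetic concrete: this is the poster's speculation, not anything Google documents, and the trust values are invented.

        def nap_score(name_present, address_present, phone_present, trust):
            """Guessed local-ranking contribution: (N + A + P) x T."""
            return (int(name_present) + int(address_present) + int(phone_present)) * trust

        # Phone number only, on a site with a guessed trust of 95 (e.g. YouTube)
        print(nap_score(False, False, True, 95))   # 95

        # Full NAP in a national phone book with a guessed trust of 100
        print(nap_score(True, True, True, 100))    # 300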

  • Windows 8, JavaScript and HTML5 - A good thing?

    - by Albers
    Most of us have seen the Windows 8 news regarding support for native HTML5/JavaScript applications. The press has pushed this as a potential threat to the .NET developer community because JavaScript and HTML5 were called "our new developer platform". The press release refers to "Web-connected and Web-powered apps built using HTML5 and JavaScript that have access to the full power of the PC." Microsoft has also been hush-hush on details related to these comments.

    Before we buy the hype and start worrying about a world where we drop our Visual Studio licenses and buy Dreamweaver, let's think about how Windows 8 HTML/JavaScript applications would be implemented. The HTML5 spec offers support for offline applications, but this won't offer the OS-integrated experience the press release refers to. MS has to be planning a way to extend access beyond the traditional JavaScript feature set. Microsoft has a similar option today: HTML Applications, or HTAs. They come close to the required features, but HTAs need ActiveX or Java integration to provide the promised OS-level access. I'm guessing that Microsoft's future OS strategy isn't built on developers cranking out ActiveX controls or Java applets.

    So where is Microsoft headed? One possibility is that MS builds a new JavaScript framework from the ground up, outside their current APIs. Another idea would be for Microsoft to add support for JavaScript as a first-class .NET language using the Dynamic Language Runtime. A solution based on the DLR could be integrated into an HTA-like model to provide the promised access, along with the full range of features in the .NET Framework. Security comes included in the Framework. And the work necessary to support this integration would tie in nicely with the effort MS has recently made providing better JavaScript and HTML5 support in Visual Studio 2010. As a bonus, a full-fledged JavaScript DLR implementation would allow single-language web solutions across client and server (think node.js) and would appeal to developers who are familiar with JavaScript but have less experience with the Microsoft tech stack.

    We will all get a better picture after the Build conference in September. But in the meantime we know that Microsoft has a reputation for providing strong developer support. We might want to reserve our harshest judgement and consider that the press release could hint at new opportunities for .NET development.

  • Black screen on latest Nvidia Cards when starting LightDM/Ubuntu

    - by Luis Alvarado
    Today I installed an Nvidia GT440 in my computer, replacing the card that was there before, an Nvidia 9500GT. After changing it I started getting a problem where the screen just goes black when loading the LightDM login screen (where I put in my user and password). The thing is, if I disconnect the VGA cable and connect it again, I get to see the LightDM greeter and everything works perfectly. The problem is that I have to disconnect/reconnect every time I reboot the PC.

    I tried installing the 285.xx drivers: same problem. I removed the Nvidia drivers installed with Jockey and rebooted: same problem. I installed the current 280.xx again: same problem. After all that I did a fresh install of Ubuntu and selected the option to install the Nvidia drivers while installing from the live CD. After booting, the same problem appeared. Dmesg does not report anything wrong, and the same goes for the Jockey log. What else should I check, or what can I do to solve this?

    Just to clarify, this does not happen BEFORE the LightDM greeter appears; I'm guessing it starts with the actual use of the video card by X, with all the 2D/3D stuff used by LightDM and Unity. I can use any tty and even see the Ubuntu logo when starting.

    UPDATE: When I open a game in fullscreen the problem appears again. I have to unplug the monitor cable and plug it back in to see the game. Then when I quit the game I have to do it again to see the desktop.

    UPDATE 2: Today I bought an HDMI cable and connected the video card to the TV I'm testing it with. It actually did log in correctly without any black screen, but it shows the resolution slightly larger than the actual size of the screen: I see only half of the launcher, since its left side is hidden outside the real resolution, and the top bar is beyond the resolution too. So the black screen is related to the VGA connection.

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start looking at the results in aggregate. The first thing I want to look at is logical reads. This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on its total number of logical reads. I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all the details; for example, one statement consumed 61.4% of the total logical reads on the system while we were tracing it. I also wanted to see the distribution of reads across statements. On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow, or the whole system is slow. If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them. You can make that individual piece faster, but you may not affect the whole system. When you're trying to speed up an entire server you need to identify the queries that are using the most disk resources in aggregate. Fixing those will make them faster and will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data: the highest ranked query averages just under 50% of the total reads on the system; the top 3 ranked queries average 73% of the total reads; and the top 10 ranked queries average 91% of the total reads. Remember these are averages across all the traces that have been uploaded, and I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads in Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you'd see that the highest ranked queries also have the highest execution counts; that graph would look very similar to the reads distribution, only flatter. These queries don't look that bad individually, but they run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.
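
    A minimal Python sketch of the kind of aggregation described above: ranking statements by total logical reads and computing each one's share of the total. The trace rows and their layout are invented for illustration; this is not the actual TraceTune/ClearTrace code.

        from collections import defaultdict

        # Hypothetical trace rows: (statement_text, logical_reads)
        trace_rows = [
            ("SELECT * FROM Orders WHERE ...", 120_000),
            ("SELECT * FROM Customers WHERE ...", 45_000),
            ("UPDATE Inventory SET ...", 9_000),
        ]

        reads_by_statement = defaultdict(int)
        for statement, reads in trace_rows:
            reads_by_statement[statement] += reads

        total_reads = sum(reads_by_statement.values())
        ranked = sorted(reads_by_statement.items(), key=lambda kv: kv[1], reverse=True)

        for rank, (statement, reads) in enumerate(ranked, start=1):
            print(f"#{rank}: {reads / total_reads:.1%} of logical reads  {statement[:40]}")

        top10_share = sum(reads for _, reads in ranked[:10]) / total_reads
        print(f"Top 10 statements account for {top10_share:.1%} of all logical reads")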

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This module will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach; but in the 1st semester I will be teaching web standards technologies: HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?

    UPDATE: based on your feedback I will only be teaching the latest version of everything: CSS3, HTML5, etc.

    I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far:

    Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work.
    Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc.)
    Lesson 2 - CSS for styling and layout - fonts, web fonts, float etc.
    Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions)
    Lesson 4 - More JS programming fundamentals, DOM manipulation
    Lesson 5 - jQuery - making things fly about and look cool
    Lesson 6 - XML and Ajax
    Lesson 7 - PHP basics - syntax, server-side principles
    Lesson 8 - PHP and MySQL - forms, logins, saving user info
    Lesson 9 - don't know
    Lesson 10 - don't know

    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions, etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses, some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than on understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session, but I do not have the option of reducing the syllabus, so I will just have to explain what each technology does and encourage the students to research it in their own time.

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to succumb eventually. I was also a little surprised at how many tools failed to remove it. The virus would redirect the browser to websites including 'licosearch', 'hugosearch' and 'facebook', and the disk would be thrashing away, infecting dlls in some way.

    I had a full, up-to-date version of McAfee installed. It identified that there was an issue in some dlls on the system and was able to 'fix' them, but they kept getting re-infected. So I installed Microsoft Security Essentials, and this too was able to identify and 'fix' the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail.

    Eventually I thought I'd investigate myself. It turned out that on reboot, the virus would start three instances of Firefox.exe, which I'm guessing would do bad things, including infecting as many dlls on the system as possible. I removed Firefox and the virus cleverly then launched three instances of Chrome! So I uninstalled Chrome and, yes, it then started to launch three instances of iexplore.exe. If I'm honest, by this stage I was just seeing whether it would be able to use any of the browsers!

    As it was starting these on reboot, I looked in my user Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked, they had been recreated. So I then looked in the registry Run and RunOnce entries under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough, there was an entry to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted, and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe, so I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, and that entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it's gone for good!
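
    For anyone doing the same kind of digging, here is a minimal Python sketch (Windows only, standard-library winreg) that just lists the auto-start values in the Run key and the Winlogon key mentioned above, as an inspection aid rather than a removal tool.

        import winreg

        def dump_values(root, path):
            """Print every value name/data pair under a registry key."""
            with winreg.OpenKey(root, path) as key:
                count = winreg.QueryInfoKey(key)[1]   # number of values under the key
                print(path)
                for i in range(count):
                    name, data, _type = winreg.EnumValue(key, i)
                    print(f"  {name} = {data}")

        # Auto-start programs (the kind of entry the virus was re-creating)
        dump_values(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run")

        # Winlogon values (Shell/Userinit are common hijack targets)
        dump_values(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon")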

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200. On April 1st we increase the price. On July 1st we increase it again. We have regular price bumps all the way through to the Summit. You can save yourself $200 if you register by Thursday. In two years of marketing for PASS and a year of finance I've learned a fair bit about our pricing, why we do this and how you react to it. Let me help you save some money!

    Price bumps drive registrations. We see big spikes in the two weeks prior to a price increase. Having a deadline with a cost attached is a great motivator to get people to take action. Registering early helps you and it helps PASS. You get the exact same Summit at a cheaper rate. PASS gets smoother cash flow and a better idea of how many people to expect. We also get people who are already registered telling their friends about the conference.

    This tiered pricing lets us serve those who are very price conscious. They can register early and take advantage of these discounts. I know there are people who pay for this conference out of their own pockets, and this is a great way for them to reduce the cost. (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.)

    We also get big price bumps after we announce the program and the pre-conference sessions. If you wrote down the 50 or so best-known speakers in the SQL Server community, I'm guessing we'll have nearly all of them at the conference. We did last year, and I expect we will this year too. We're going to have good sessions. Why wait? Register today.

    If you want to attend a pre-conference session you can always add it to your registration later. Pre-con prices don't change, and it's very easy to update your registration to add a pre-conference session later. I want as many people as possible to attend the Summit. It's been a great experience for me and I hope it will be for you. And if you are going to go, do yourself a favor and save some money. Register today!

  • Loose Coupling and UX Patterns for Applications Integrations

    - by ultan o'broin
    I love that software architecture phrase, loose coupling. There's even a whole book about it. And if you're involved in enterprise methodology, you'll know just how important loose coupling is to the smart development of applications integrations too. Whether you are integrating offerings from the Oracle partner ecosystem with Fusion apps or working on applications coexistence scenarios, loose coupling enables the development of scalable, reliable, flexible solutions, with no second-guessing of technology.

    Another great book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, tells us about the loose coupling benefits of reducing the assumptions that integration parties (components, applications, services, programs, users) make about each other when they exchange information. Eliminating assumptions applies to UI development too. The days of assuming it's enough to hard-code a UI, with linking libraries called code, on a desktop PC for an office worker are over. The book predates PaaS development and SaaS deployments, and was written when web services and APIs were emerging. Yet it calls out how using middleware as an assumptions-dissolving technology "glue" is central to applications integration. Realizing integration design through a set of middleware messaging patterns (messaging in the sense of asynchronously communicating data) that enable developers to meet the typical business requirements of enterprises requiring integrated functionality is very Fusion-like.

    User experience developers can benefit from the loose coupling approach too. User expectations and work styles change all the time, and development is now about integrating SaaS through PaaS. Cloud computing offers a virtual pivot where a single source of truth (customer or employee data, for example) can be experienced through different UIs (desktop, simplified, or mobile), each optimized for the context of the user's world of work and task completion.

    Smart enterprise applications developers, partners, and customers use design patterns for user experience integration benefits too. The Oracle Applications UX design patterns (and supporting guidelines) enable loose coupling of the optimized UI requirements from code. Developers can get on with the job of creating integrations through web services, APIs and SOA without having to figure out design problems about how UIs should work. Adding the already user-proven UX design patterns (and supporting guidelines) to your toolkit means ADF and other developers can easily offer much more than just functionality and be super productive too. Great-looking application integration touchpoints can be built with our design patterns and guidelines too, for a seamless applications UX.

    One of Oracle's partners, Innowave Technologies, used a loose coupling architecture and our UX design patterns to create an integration for a customer that was scalable, cost effective and fast to develop, and that kept users productive while paving a roadmap for the customer to keep pace with the latest UX designs over time. Innowave CEO Basheer Khan, a Fusion User Experience Advocate, explains how to do it on the Usable Apps blog.

  • Migrating to Natty (or any other future version of Ubuntu)

    - by Nik
    I am hoping that this question will help other Ubuntu users when migrating to a newer version of Ubuntu, so it should have all the info they need. Please, when you answer, try to phrase things as points for easy understanding. I understand that some of the questions I ask might have been asked before by other users; in that case just provide the links to those questions. I am running Ubuntu 10.10 Maverick Meerkat, in case that is important.

    I can say for sure that a clean install is definitely better than an upgrade, since it gives you an opportunity to clean your system and get a fresh start. However, some of us like to retain certain software configuration or files, etc. The questions are as follows:

    1. How do you save the configuration files of certain applications, for instance Thunderbird, Firefox, etc., so that you can basically paste them into the new version of Ubuntu? (Thunderbird, for instance, has all my mail, so I definitely want to back up its configuration and then use it in the new installation.)
    2. I have some applications like MATLAB and Maple (based on Java) installed. When I migrate, can I just copy the entire installation folder to the new version of Ubuntu? Will it still work as it does now if I do that?
    3. When doing a backup, which folders should be backed up? Obviously my personal files will be backed up, but other than that, is it necessary to back up stuff in the home folder, /usr/bin, etc.?
    4. I have BURG installed. I am guessing it will be erased when I do a clean install, along with the program's configuration and everything. How can I back it up?
    5. I am dual booting Ubuntu alongside Windows 7. When I perform the clean install of Ubuntu, will GRUB (the bootloader) be removed and in any way jeopardize my Windows installation?
    6. Over time I have added a lot of PPAs which are of course compatible with my current Ubuntu version. How do I make a backup of all my PPAs, and will they be compatible with the newer version of Ubuntu when I restore them?

    I hope this covers all the questions or doubts that a user might face when thinking about performing a clean install of their system. If I missed anything please mention it as a comment and I will add it to my answer.
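
    As a small illustration of the kind of backup that questions 1 and 6 describe, here is a minimal Python sketch (standard library only) that archives a couple of hidden config folders from the home directory and lists the PPA source files under /etc/apt/sources.list.d/; the folder names are just examples, not an authoritative list of what needs backing up.

        import tarfile
        from pathlib import Path

        home = Path.home()

        # Example per-application config folders (adjust to what you actually use)
        config_dirs = [home / ".thunderbird", home / ".mozilla"]

        with tarfile.open(str(home / "config-backup.tar.gz"), "w:gz") as archive:
            for d in config_dirs:
                if d.exists():
                    archive.add(str(d), arcname=d.name)   # stored as .thunderbird, .mozilla, ...
                    print(f"archived {d}")

        # Record which PPAs/sources are currently configured so they can be re-added later
        sources_dir = Path("/etc/apt/sources.list.d")
        for f in sorted(sources_dir.glob("*.list")):
            print(f.name)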

  • Yet another frustum culling question

    - by Christian Frantz
    This one is kinda specific. If I'm to implement frustum culling in my game, that means each one of my cubes needs a bounding sphere. My first question is: can I make the sphere so close to the edge of the cube that it's still easily clickable for destroying and building? Frustum culling is easily done in XNA, as I've recently learned; I just need to figure out where to place the culling code. I'm guessing in the method that draws all my cubes, but I could be wrong.

    My camera class currently exposes a bounding frustum, which is updated in the camera's update method like so:

        frustum.Matrix = (view * proj);

    Simple enough, as I can call that when I have a camera object in my class. This works for now, as I only have a camera in my main game class. The problem comes when I decide to move my camera to my player class, but I can worry about that later.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        CurrentContainmentType = CamerasFrustrum.Contains(cubes.CollisionSphere);

    Can it really be as easy as adding those two lines to the foreach loop in my draw method? Or am I missing something bigger here?

    UPDATE: I have added the lines to my draw method and it works great!! So great, in fact, that just moving a little bit removes the whole map. Many factors could have caused this, so I'll try to break it down.

        cubeBoundingSphere = new BoundingSphere(cubePosition, 0.5f);

    This is in my cube constructor. cubePosition is stored in an array. The vertices that define my cube are factors of 1, i.e. (1, 0, 1), so the radius should be 0.5, at least I think it should. The spheres are created every time a cube is created, of course.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        foreach (Cube block in cube.cubes)
        {
            CurrentContainmentType = cam.frustum.Contains(cube.cubeBoundingSphere);
            ///more code here
            if (CurrentContainmentType != ContainmentType.Disjoint)
            {
                cube.Draw(effect);
            }
        }

    This is within my draw method. Now I know this works, because the map disappears; it's just working wrongly. Any idea what I'm doing wrong?
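
    Two things worth double-checking in the update above: whether cubePosition is the cube's centre or a corner (the sphere must be centred on the cube's centre for the cull to be accurate), and whether the loop should be testing each block's own sphere rather than a single shared cube.cubeBoundingSphere. Also, the tight bounding sphere of a unit cube has radius sqrt(3)/2, roughly 0.87 about its centre, not 0.5. XNA's BoundingFrustum.Contains does the actual test for you; purely as a language-neutral illustration of what it computes, here is a minimal Python sketch of the sphere-vs-frustum check; the plane data at the bottom is made up for the example.

        import math

        def sphere_outside_frustum(center, radius, planes):
            """planes: list of (normal, d) with unit normals pointing into the frustum,
            so a point p is inside a plane when dot(normal, p) + d >= 0."""
            cx, cy, cz = center
            for (nx, ny, nz), d in planes:
                signed_dist = nx * cx + ny * cy + nz * cz + d
                if signed_dist < -radius:      # completely behind one plane -> culled
                    return True
            return False                       # intersecting or fully inside

        # Tight bounding sphere for a unit cube whose *centre* is cube_center
        cube_center = (10.0, 0.0, 10.0)
        radius = math.sqrt(3) / 2              # about 0.866, not 0.5

        # Hypothetical frustum: six (normal, d) planes extracted from view * projection
        planes = [((0.0, 0.0, 1.0), 100.0), ((0.0, 0.0, -1.0), 100.0),
                  ((1.0, 0.0, 0.0), 100.0), ((-1.0, 0.0, 0.0), 100.0),
                  ((0.0, 1.0, 0.0), 100.0), ((0.0, -1.0, 0.0), 100.0)]

        print(sphere_outside_frustum(cube_center, radius, planes))  # False: draw it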

  • Android Array Lag?

    - by Mike
    I am making a platform game for Android. It is sort of a tile-based game. I added bullets and enemies with AI and a bunch of tile types. I created a simple map with no enemies. Everything was running well and smoothly until I shot a bunch of bullets randomly everywhere. A couple of hundred bullets later, the FPS dropped.

    I ran a test to find out whether the bullets were the problem: I made another simple map with just a tile to stand on and left it for a while. Minutes later, I played around with it a bit to check whether the FPS had changed, and it hadn't. I reloaded the same map and shot a lot of bullets. Minutes later, the FPS was visibly lower, even after the number of bullets was back to zero.

    Points to note:

    - The programmed FPS is 30.
    - Tested on a Samsung Galaxy Y and a Samsung Galaxy W.
    - Any tile, enemy or bullet that is off screen is not drawn, to prevent lag.
    - Bullets collide with tiles (if they don't collide within 450 frames, they are removed from the array).
    - I used List<Bullet> bullets = new ArrayList<Bullet>();
    - I used bullets.add(new Bullet(x, y, params...));
    - I used for(...){ if(...){ bullets.remove(i); } }

    Code for the bullets:

        private void drawBullets(Canvas canvas) {
            for (int i = 0; i < bullets.size(); i++) {
                Bullet b = bullets.get(i);
                b.update(canvas); // updates physics
                if (b.t > blm) { // if the bullet is past its expiry
                    bullets.remove(i);
                    i--;
                } else {
                    if (svx((b.x)) > 0 && svx(b.x) < width && svy((b.y)) > 0 && svy(b.y) < height) { // if bullet is not off screen
                        b.draw(canvas); // draw the bullet
                    }
                }
            }
        }

    I tried searching for solutions and references but had no luck. I'm guessing that the lag has something to do with the array and the bullets or classes that I've loaded? I'm not sure! Someone please help! Thanks in advance! :)
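
    Not a diagnosis of this particular slowdown, but a common pattern in bullet-heavy games is to avoid allocating a new Bullet per shot and removing from the middle of the list every frame, both of which create garbage-collector pressure on Android. A minimal, language-neutral sketch of a fixed-size object pool, written in Python just to show the idea; the field names and the 256 cap are arbitrary.

        class Bullet:
            __slots__ = ("x", "y", "vx", "vy", "age", "alive")

            def __init__(self):
                self.alive = False

            def spawn(self, x, y, vx, vy):
                self.x, self.y, self.vx, self.vy = x, y, vx, vy
                self.age = 0
                self.alive = True

        MAX_BULLETS = 256
        pool = [Bullet() for _ in range(MAX_BULLETS)]   # allocate once, reuse forever

        def fire(x, y, vx, vy):
            for b in pool:
                if not b.alive:          # reuse a dead bullet instead of new Bullet(...)
                    b.spawn(x, y, vx, vy)
                    return

        def update(expiry=450):
            for b in pool:
                if b.alive:
                    b.x += b.vx
                    b.y += b.vy
                    b.age += 1
                    if b.age > expiry:   # expire in place; no list.remove() mid-iteration
                        b.alive = False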

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in GNOME 3.2, LXDE, XFCE and KDE. I am running Ubuntu 11.10 and this problem has been happening since a fresh install.

    I have noticed that if I restart GNOME, or otherwise log out and back in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that would account for it, and there are no processes using excessive CPU cycles. Given the symptoms, I'm guessing there is an underlying package that interacts with all D.E.s that is the source of the problem. Just to clarify: in extreme situations, if I let the slowdown continue without restarting the D.E., my system finally gets to a point where everything, even my clock, freezes. The time over which GNOME slows down (noticeable after 17 hrs) is much shorter than for LXDE or XFCE, but eventually they freeze as well. I haven't mentioned Unity since I haven't used it enough to verify the problem, but I don't see why it should be different.

    Just in case this is relevant: I suspend my computer rather than turn it off. I do this because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode.

    I know it is possible that this problem is due to an interaction between two or more of the applications I use, and as such it may not be reproducible by others. In the end I am just wondering (a) whether other people are experiencing this issue and (b) whether anyone has advice on where to start looking for a solution, if one is not already known. I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem.

    Edit: I took a video of my latest freeze that you can watch at http://youtu.be/flKUqUzCmdE, sorry about the audio.

    Edit: Since that video I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable, so I had to uninstall them. Any thoughts?
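
    One way to start narrowing this down is to log per-process memory use over a session and compare snapshots taken just after login with ones taken near the freeze. A minimal sketch, assuming the third-party psutil package is installed (it is not part of the standard library); the log path and the 5-minute interval are arbitrary choices.

        import time
        import psutil

        LOG = "/tmp/proc-usage.log"

        def snapshot():
            """Append the top memory consumers to a log file with a timestamp."""
            procs = []
            for p in psutil.process_iter(["pid", "name", "memory_info"]):
                try:
                    procs.append((p.info["memory_info"].rss, p.info["pid"], p.info["name"]))
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    continue
            procs.sort(reverse=True)
            with open(LOG, "a") as f:
                f.write(f"--- {time.ctime()} ---\n")
                for rss, pid, name in procs[:10]:
                    f.write(f"{rss // (1024 * 1024):6d} MB  {pid:6d}  {name}\n")

        while True:          # run in a terminal and leave it; compare snapshots later
            snapshot()
            time.sleep(300)  # every 5 minutes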

  • Are there software options (preferably .NET) for doing distance and speed analysis of footballers moving on video?

    - by Anonymous Type
    Editing the question for clarity. Thanks for the feedback so far, very insightful. I'm not sure how far along this part of the software community is, or what libraries, if any, exist for me to leverage. Here's what I'm trying to do.

    Problem: take an existing video of a game of rugby league. The rugby league field is 100 metres long and 70 metres wide, and has white line markings every 10 metres running across the width of the field, as well as along the sidelines. Each side has 13 players on the field. Players on each team have identical jerseys that normally contrast strongly against the background colours (the green/brown field), the referee's colour (usually yellow) and the designated water runner (orange). All players have a unique number in thick white lettering on their backs for identification. Video is taken with a high-definition camera. Currently only one camera is used (2D) and the existing video does not contain a foreground object of fixed spatial dimensions (as suggested in one answer for comparison measurements, though I could add this to future filming sessions if it is worthwhile). The players do not run in a straight line 50% of the time, but will go sideways or on a diagonal to play the ball. The distance measured always starts from the spot of the previous "tackle" and ends where the player stops forward movement. It is not always possible to determine a player's number from the video (facing the other direction, sunlight, others standing in the way of the camera), but this isn't important, as the software could allow unknown "runs" to be entered manually after analysis.

    The task is to determine the distance between two points (i.e. where the player started his "run" and where he finished it). I'm guessing this would be quite doable if I manually marked the start and end points in the video. But how would I use landmarks in the background to determine the distance (assuming the person taking the video has kept it from jerking around)?

    Question: do software packages or libraries exist that are specialised enough to assist with writing analysis software to determine a sportsperson's distance travelled, based on video taken of the performance?
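
    On the "how would I use landmarks in the background" part: if you mark four known field points in a frame (for example, intersections of two 10-metre lines with both sidelines), a planar homography maps image pixels to field coordinates in metres, and the run distance follows directly; this assumes the marked points (e.g. the players' feet) lie on the ground plane. A minimal sketch using Python with OpenCV and NumPy; the pixel coordinates are made-up placeholders, and since the question asks about .NET, note as an aside that OpenCV is also reachable from .NET via wrappers such as Emgu CV.

        import cv2
        import numpy as np

        # Four field landmarks clicked in the image (pixel coords, placeholders here) ...
        image_pts = np.array([[412, 610], [980, 598], [430, 402], [955, 396]], dtype=np.float32)

        # ... and the same four points in field coordinates (metres), e.g. where two
        # 10 m lines that are 10 m apart meet each sideline of the 70 m wide field.
        field_pts = np.array([[0, 0], [10, 0], [0, 70], [10, 70]], dtype=np.float32)

        H, _ = cv2.findHomography(image_pts, field_pts)

        def to_field(pixel_xy):
            """Map one image pixel to field coordinates in metres."""
            pt = np.array([[pixel_xy]], dtype=np.float32)   # shape (1, 1, 2)
            return cv2.perspectiveTransform(pt, H)[0][0]

        # Start and end of a "run", marked manually in the image (placeholders)
        start = to_field((450, 580))
        end = to_field((700, 560))
        print("distance run: %.1f metres" % np.linalg.norm(end - start))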

  • Help with DB structure, VoD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here, to see if it can be improved.

    The core table is Videos:

        ID       bigint(20)   (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string for each video, used as a shorthand identifier. airdate is the timestamp (stored in text format; right there, maybe I should change that to a TIMESTAMP that auto-updates). title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID joins to another table for the module title, vID joins to another table for the language of the video (English, Russian, etc.), and sID joins to the summary for the module, a paragraph stored in an external database.

    The slowest part of the website is the logging part. I store that data in another table called Hits:

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long and the script I wrote to process it is very, very slow, so slow in fact that it times out. A row is added to this table each time a user clicks 'Play' on the website, so progID is the same as the pID, and it logs the PHP time() timestamp in ts.

    Basically I load the entire Hits table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows:

    1. Is there a better way of structuring the Videos table? If so, what do you suggest?
    2. Is there a better way of structuring Hits? If so, please help/tell me!
    3. Or are my tables fine and the PHP code is what's crappy?
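
    On question 2, the per-day counting can be pushed into the database with an index on (progID, ts) and a GROUP BY, instead of loading every Hits row into PHP. A minimal sketch of the idea using Python's built-in sqlite3 purely for illustration; on the actual MySQL/PHP stack the equivalent query would use FROM_UNIXTIME(ts) for the day and ALTER TABLE ... ADD INDEX for the index.

        import sqlite3
        import time

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE Hits (id INTEGER PRIMARY KEY, progID TEXT, ts INTEGER)")

        # The index is what keeps the aggregation fast as the table grows
        db.execute("CREATE INDEX idx_hits_prog_ts ON Hits (progID, ts)")

        # A few fake 'Play' clicks
        now = int(time.time())
        db.executemany("INSERT INTO Hits (progID, ts) VALUES (?, ?)",
                       [("ab123", now), ("ab123", now - 86400), ("cd456", now)])

        # Count hits per video per day in SQL rather than in application code
        rows = db.execute("""
            SELECT progID,
                   strftime('%Y-%m-%d', ts, 'unixepoch') AS day,
                   COUNT(*) AS hits
            FROM Hits
            GROUP BY progID, day
            ORDER BY day, progID
        """).fetchall()

        for prog, day, hits in rows:
            print(prog, day, hits)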

  • Help understanding my hard drive / partitioning situation... Pictures Included! :)

    - by xopenex
    I have installed Windows 7 and two different distros of Linux. I have read about and tried to understand things like "spanned", "extended", "primary", "swap", "/dev" device names, "GRUB", "Windows Boot Loader/Manager", etc., but I have a very, very limited understanding of all of it! :)

    I am trying to figure out how to get all OS boot options into one boot manager (I'm thinking it will be GRUB), because at this point when I turn on my computer I basically get two boot options (excluding the memtest entries, etc.). One option boots one of my Linux distros and the second option boots Windows 7. When I go with the first option, Linux boots up. When I go with the second, Windows 7 option, I get the Windows Boot Manager screen, and there I can choose Windows 7 or my other installation of Linux (Ubuntu). In addition, I did not have a swap partition from my first installation of Linux; I created it during the installation of my second distro.

    This is a lot of info for me, but I'm guessing that you Linux gurus pretty much understand what is going on! I hope my question makes sense; I will try to simplify:

    1. Can I get all 3 OSes to boot from one GRUB?
    2. Can I get both Linux distros to use one swap partition? (I have seen in other threads that this is possible, but because of how my disk is partitioned, I don't know if I can do it.)

    I hope I don't have to start all over, installing one after the other. I've got some pics that may help explain my hard drive situation. Thanks guys! :)

    EDIT: I had some pics, but I'm a new member so I can't post them. :( Here are descriptions of the pics, in case I can email them or post them later:

    - grub: the first screen I come to after turning on the computer. "Ubuntu with Linux 3.2.6" (highlighted) fires up Linux perfectly; the other choice at the bottom of the list, "Windows 7 (loader) (on /dev/sda1)", brings me to the Windows Boot Manager screen below.
    - Windows Boot Manager: both options here load the OS selected.
    - Disk Manager (Windows): my hard drive situation as seen by the Windows Disk Management utility.
    - gparted: my hard drive situation as seen by GParted.
    - My Computer: my hard drive situation as seen in "My Computer".
    - paragon: one last picture of my hard drive situation, through the eyes of Paragon.

  • TortoiseSVN hangs in Windows Server 2012 Azure VM

    - by ZaijiaN
    Following @shanselman's article on remoting into an Azure VM for development, I spun up my own VS 2013 VM; that image runs on Windows Server 2012. Once I was able to remote in, I started installing all my dev tools, including TortoiseSVN 1.8.3 64-bit.

    Things went south once I started attempting to check out code from my personal SVN server. It would hang and freeze often, although sometimes it would work: I was able to partially check out projects, but I would get frequent connection timeout errors. My personal SVN server (VisualSVN 2.7.2) runs at home on a Windows 7 machine, and I have a DynDNS URL pointing to it. I have also configured my router to pass all 443 traffic through to the appropriate port on the server. I self-signed a cert and made sure it was imported into the VM cert store under trusted root authorities. I have no problems connecting to my SVN server from four or five other computers and locations.

    From the Azure VM, in both IE and Chrome, I can access the repository web browser with no issues. There are no outbound firewall restrictions. I have installed other SVN add-ins for Visual Studio (AnkhSVN, VisualSVN) and attempted to connect to my SVN server, with largely the same results: random and persistent connection issues (hangs/timeouts). I spun up a completely fresh Windows Server 2008 Azure VM, installed TortoiseSVN, and had the same results.

    So I'm at a loss as to what the problem is and how to fix it. Web searches on TortoiseSVN and Windows Server issues don't yield any current or relevant information. At this point, I'm guessing that maybe some setting or configuration in the MS Azure VM images is the culprit, although I should probably spin up my own local Windows Server VM to rule out a Windows Server issue. Any thoughts? I hope I'm just missing something really obvious!

  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1 GB of RAM. Via the Tomcat configuration tool on Windows, I was able to set the min/max/permgen sizes of the JVM. Those were set to 256/512/128 respectively.

    On the Ubuntu box, I've tried setting the JVM options in several different places, including:

    - Adding JAVA_OPTS and CATALINA_OPTS in /etc/environment
    - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
    - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
    - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
    - Un-commenting JAVA_OPTS and modifying it in /etc/default/tomcat6

    Nearly all of those methods did not work; the exception was modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I only just made that one). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 runs as a service and appears to behave this way. But when I look at htop on the Linux box, there are 20+ tomcat6 entries (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings.

    The app runs fine for a bit but eventually ends up locking up. I'm guessing each of these apps thinks it has 512 MB to work with, never GCs, and then locks Tomcat up completely. What is the proper way to set all of this up?

  • Windows 2008 Terminal Services "Easy Print" and Matrix Printers

    - by Cesar87
    Server: Windows Server 2008 Standard SP2 with the "Terminal Services" role. Clients: Windows XP SP3 + .NET 3.5 Framework SP1 + Remote Desktop Client 7.0.

    We are using the "Easy Print" feature, which allows programs running on the server to "see" printers installed on the client machines. Everything works fine EXCEPT when we send text-only output to a dot-matrix printer; in that case the printer only outputs a blank page.

    At first we had problems with the error "Windows Presentation Foundation Terminal Server Print W has encountered a problem and needs to close.", but this was fixed by replacing TsWpfWrp.exe with the one from Vista SP1, as suggested here. But now we only get a blank page! Every other (graphical) document we send to the printer works 100%. We also tried using the "Generic / Text Only" driver, but the result is the same.

    Now we are trying to change parameters like the print processor on the "Advanced" tab of the printer driver to see if anything happens, but this is just guessing and we really don't know what to try anymore. The problem appears to be in the Easy Print driver, but we have found almost no resources about it. Any tips are welcome.

  • OSX esc key stops working (randomly)

    - by Jan Hancic
    I have a 2011 MacBook Air with OS X Mountain Lion (upgraded from Lion), and ever since upgrading, my Escape key randomly stops working. It's not that the keyboard on the laptop is damaged, as the same thing happens if I use Apple's Bluetooth keyboard, so I'm guessing it's a software issue. Also, in some cases pressing Ctrl+Esc achieves the same thing as just Esc, so I'm 100% sure it's not a hardware problem. Does anybody have any idea what this might be about and how to fix it?

    Edit: the Escape key stops working completely, but it starts working again after I restart the computer.

    Edit 2: this usually happens after the computer wakes from sleep. So it's not that it just stops working in the middle of a session; rather, it happens after I put the machine to sleep (or just close the lid) and then open it again. Has nobody else got the same issue? Is there a better place than Super User to ask OS X related questions, maybe?

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So if a file exists in the cache, nothing is sent; but if the file on the server has changed since it was cached by the browser, the file is sent and updated anyway.

    Then, if you have compression like gzip enabled on the server, the files to be delivered to the client must be gzipped on the way out, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, along with compressed versions of those files, so that compression does not have to be done each time a file is requested.

    And also, how are files actually requested? Does the browser ask for each file as it encounters it in the HTML, if that specific file is not stored in the local cache, or does it add up all the files that are needed and ask for the whole bunch at the same time?

    But that's only guessing from a programming point of view; I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated too.
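
    A quick way to see the mechanics in question is to issue a conditional, compression-aware request yourself: the server answers 304 Not Modified (with no body) when the cached copy is still current, and signals compression with a Content-Encoding header. A minimal sketch using the third-party requests library against a placeholder URL.

        import requests

        url = "https://example.com/styles.css"   # placeholder URL

        # First request: browser-style fetch, advertising gzip support
        first = requests.get(url, headers={"Accept-Encoding": "gzip"})
        etag = first.headers.get("ETag")
        last_modified = first.headers.get("Last-Modified")
        print(first.status_code, first.headers.get("Content-Encoding"))  # e.g. 200 gzip

        # Revalidation: send back the validators the server gave us
        headers = {"Accept-Encoding": "gzip"}
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified

        second = requests.get(url, headers=headers)
        print(second.status_code)  # 304 if unchanged: the body is not re-sent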

  • PECL install error after upgrading to OSX 10.8

    - by Clive
    I've just upgraded my OS to Mountain Lion and PECL is no longer working (it's on a test drive, so no drama, but I'd like to get it working so I can upgrade the OS on my shiny new SSD as well). I'm using the native PHP installation, no MacPorts/Homebrew or anything like that.

    Running sudo pecl install uploadprogress (for example) produces the following terminal output:

        downloading uploadprogress-1.0.3.1.tgz ...
        Starting to download uploadprogress-1.0.3.1.tgz (9,040 bytes)
        .....done: 9,040 bytes
        4 source files, building
        running: phpize
        grep: /usr/include/php/main/php.h: No such file or directory
        grep: /usr/include/php/Zend/zend_modules.h: No such file or directory
        grep: /usr/include/php/Zend/zend_extensions.h: No such file or directory
        Configuring for:
        PHP Api Version:
        Zend Module Api No:
        Zend Extension Api No:
        autom4te: need GNU m4 1.4 or later: /usr/bin/m4
        ERROR: `phpize' failed

    I'm guessing the problem is the three grep lines. I've found several threads suggesting this is caused by Xcode not being installed, but Xcode is installed and updated to the latest version (4.4). All the relevant symlinks to /Developer/usr/bin/* also exist as they should. m4 is currently at version m4 (GNU M4) 1.4.13, so even though the output contains a line about it, I don't think that can be the problem. I'm sure it's just a simple issue; anyone got any clues?

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed.

    What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled so that they appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets destined for those IPs so that they end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125).

    I need to insert whatever rules I come up with into the IPCop ruleset through /etc/rc.local, I know that much; I'm just not sure how to structure them. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of a single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets. So I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting them to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets I would have the rules that do the IP matching and mangling.

    Am I approaching this right? If so, I'm not very familiar with mangling, and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA!

    Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets, so I guess I could alternatively put the rules in there.

  • Can't connect to LAN when connected to D-Link DIR-615

    - by Senseful
    I have a D-Link DIR-615 Wireless N 300 router. I didn't use the CD it comes with to set up the network; instead I configured it manually through the router's settings, which are accessed via a web browser. The main changes I made are:

    - Secured the router so that a password is required before clients can use the wireless internet.
    - Broadcasting 802.11n only (not b or g).

    I can connect to the router just fine and I'm able to access the internet. The only problem is that I don't see any of the other computers on my LAN. When I connect to another Wi-Fi router that I have (which is connected to the same network), I can see all of the computers on my LAN just fine. Therefore, I'm guessing that the reason I can't reach the LAN is not a problem with my computer but a problem with the router instead.

    I'm on a MacBook Air running Mac OS X 10.6.6. I tried contacting D-Link technical support, but they only help if you have problems connecting to the internet; they aren't really concerned with problems that have to do with accessing PCs on the same network.

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to run mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition still be mountable and the data still there?

    Longer version: I used to have a RAID 6 array but decided to dismantle it. The disks from the array are now used as non-RAID disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks is failing to be recognized when I try to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there is a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right: the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

        sudo mdadm --examine /dev/sdd
        /dev/sdd:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : b164e513:c0584be1:3cc53326:48691084
                   Name : pringle:0  (local to host pringle)
          Creation Time : Sat Jun 16 21:37:14 2012
             Raid Level : raid6
           Raid Devices : 6

         Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
             Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
          Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
            Data Offset : 2048 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2

            Update Time : Sun Aug 12 22:20:39 2012
               Checksum : 4c329db0 - correct
                 Events : 1238535

                 Layout : left-symmetric
             Chunk Size : 512K

            Device Role : Active device 3
            Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.

  • Why does my Belkin wireless router have eMule ports open?

    - by Jeremy Powell
    I have a Belkin F6D4230-4 v1 router. When I port scan it with nmap I get the following:

        $ sudo nmap -sS -A -T5 192.168.2.1 -p-

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-17 11:40 CDT
        Interesting ports on 192.168.2.1:
        Not shown: 65532 closed ports
        PORT     STATE    SERVICE VERSION
        80/tcp   open     http    Belkin 2307 wifi router http config (IP_SHARER httpd 1.0)
        |_ html-title: '+i1+'
        4661/tcp filtered unknown
        4662/tcp filtered edonkey
        MAC Address: 00:22:75:5D:52:D8 (Belkin International)
        Device type: WAP|broadband router|firewall|printer|specialized|webcam
        Running (JUST GUESSING) : Linksys embedded (95%), TRENDnet embedded (95%), Netgear embedded (92%), Canon embedded (89%), On Time RTOS (89%), Symantec embedded (89%), D-Link embedded (86%), Polycom embedded (85%)
        Aggressive OS guesses: Linksys WRT54GC or TRENDnet TEW-431BRP wireless broadband router (95%), TRENDnet TW100-BRF114 broadband router (95%), Netgear FR114P ProSafe VPN firewall (92%), Canon PIXMA MX850 printer (89%), On Time RTOS (89%), Symantec Firewall/VPN 100 (89%), D-Link DI-714P+ wireless broadband router (86%), Polycom ViewStation video conferencing system (85%)
        No exact OS matches for host (test conditions non-ideal).
        Network Distance: 1 hop
        Service Info: Device: WAP

        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 21.57 seconds

    Why are ports 4661 and 4662 (the eDonkey/eMule ports) showing up? This is a basic, out-of-the-box installation.
