Search Results

Search found 3861 results on 155 pages for 'assignment operator'.

Page 119/155

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours and I decided that, after checking MVA, I'm going to query the available course material over at Pluralsight. Wow, thanks to fantastic corporations and acquisitions there are lots of courses available. Nicely split by SharePoint version as well as particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks. Today's resource(s) Of course, I'm "all in" for the latest developer resources: SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience SharePoint 2013 Developer Ramp-Up - Part 2 SharePoint 2013 Developer Ramp-Up - Part 3 SharePoint 2013 Developer Ramp-Up - Part 4 SharePoint 2013 Developer Ramp-Up - Part 5 SharePoint 2013 Developer Ramp-Up - Part 6 I guess, I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too. That's mainly for personal reference instead of bookmarking in the browser; I'll use my own blog for that purpose. Atkinson's SharePoint Blog Düsseldorfer Jung Doerflers SharePoint Blog SharePoint Community Absolute SharePoint The links are in no preferential order and I added them as soon as I found them. Most probably, I'm going to report about specific articles from those resources during this challenge. So, stay tuned and I'll try to provide more details on certain topics. Takeaway First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been three years (roughly) - 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase. Especially when it's about historical expressions like 'WSS'*, as I had it yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it. * WSS stands for Windows SharePoint Services 3.0 which forms the 'core engine' of SharePoint 2007. Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I guess, I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material: SharePoint 2007 (aka SharePoint v3 aka SharePoint 12) Windows SharePoint Services (WSS) 3.0 Microsoft Office SharePoint Server (MOSS) 2007 .NET Framework 3.0, 32-bit or 64-bit OS SharePoint 2010 (aka SharePoint v4 aka SharePoint 14) Microsoft SharePoint Foundation (SPF) 2010 Microsoft SharePoint Server (SPS) 2010 .NET Framework 3.5 SP1, 64-bit OS only SharePoint 2013 Microsoft SharePoint Foundation (SPF) 2013 Microsoft SharePoint Server (SPS) 2013 .NET Framework 4.5, 64-bit OS only After this quick excursion it is getting more interesting. SharePoint 2013 has a number of Development Practices and Techniques under the hood, and choosing the correct path will be quite a decision process depending on the task requirements. 
At the moment, the following two options seem to be my future fields of operation: Client-Side Object Model (CSOM) REST API and OData syntax As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will be using CSOM but of course I'll keep an eye on the REST API, too. JavaScript has had quite a momentum for a while now, and it would be a shame to ignore that kind of opportunity and its possibilities.
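    As a rough illustration of the REST/OData option (this sketch is mine, not part of the original post), a quick probe of the SharePoint 2013 REST endpoint could look like the Python snippet below. The server URL and credentials are placeholders, and a real farm would usually need NTLM or claims-based authentication rather than plain basic auth.

        import requests  # hypothetical quick probe of the _api endpoint

        site = "https://sp2013.example.com/sites/dev"  # placeholder URL
        resp = requests.get(
            site + "/_api/web/lists",
            headers={"Accept": "application/json;odata=verbose"},
            auth=("DOMAIN\\user", "password"),  # assumption: basic auth enabled; usually NTLM is required
        )
        print(resp.status_code)
        for entry in resp.json()["d"]["results"]:
            print(entry["Title"])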

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead. Configuration I have this inside a <VirtualHost> block in my apache 2.2.9 configuration: RewriteEngine on RewriteLog /tmp/rewrite.log RewriteLogLevel 5 #push virtually everything through our dispatcher script RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L] Diagnostics attempted That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php Rewrite log trace Here's what I see in the rewrite.log init rewrite engine with requested uri /test.txt applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt' RewriteCond: input='/test.txt' pattern='!-f' => matched RewriteCond: input='/test.txt' pattern='!-d' => matched rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m=' split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m= local path result: /dispatch.php prefixed with document_root to /path/to/my/public_html/dispatch.php go-ahead with /path/to/my/public_html/dispatch.php [OK] So, it looks to me like the REQUEST_FILENAME is being presented as a path from the document root, rather than the file system root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...

    Read the article

  • Restore passwd for root on a server

    - by s.mihai
    Hello,       I have a DVR server with embedded Linux. It has some telnet functions but I don't have the password for it (the Chinese manufacturer refuses to give me the password). I did get an upgrade folder from them and found a passwd file inside.       So I assume that when I upgrade the firmware the password in that file will be used.       Now I am trying to modify the file so that I can insert a password I already know.       The problem is that I don't know how to create the password hash. From what I figured, the password hash is $1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961       The file is named rom.ko and I found a command telnetd /mnt/yaffs/web/boa -c /mnt/yaffs/web & /bin/cp -f /mnt/yaffs/rom.ko /etc/shadow in a script file, so I assume this is the right way.       Can you help me reconstruct a password that I know already? Tell me how or make one for me :) ?... passwd file: root:$1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961:0:0:99999:7:-1:-1:33637592 bin::10897:0:99999:7::: daemon::10897:0:99999:7::: adm::10897:0:99999:7::: lp::10897:0:99999:7::: sync::10897:0:99999:7::: shutdown::10897:0:99999:7::: halt::10897:0:99999:7::: mail::10897:0:99999:7::: news::10897:0:99999:7::: uucp::10897:0:99999:7::: operator::10897:0:99999:7::: games::10897:0:99999:7::: gopher::10897:0:99999:7::: ftp::10897:0:99999:7::: nobody::10897:0:99999:7::: next::11702:0:99999:7:::
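    As a side note for anyone hitting the same problem: the $1$ prefix marks an MD5-crypt hash, where the text between the second and third $ is the salt (1/lfbDKX here). Below is a minimal sketch of generating a compatible hash with Python's crypt module, assuming a Linux box whose crypt(3) supports MD5 (the module is POSIX-only and was removed in Python 3.13; passlib is an alternative). The password and salt are just examples.

        import crypt

        # "$1$" selects MD5-crypt; "abcdefgh" is an example 8-character salt.
        new_hash = crypt.crypt("MyNewPassword", "$1$abcdefgh$")
        print(new_hash)  # something like $1$abcdefgh$<22-character digest>

    The resulting string can then replace the existing hash in the root entry of the passwd/shadow file.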

    Read the article

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating User Home folders instead of our 2nd level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all User Attributes just fine. The problem is figuring out the minimum level of permissions needed on the File Server to create the home share, without giving them access to everyone's home shares. So if they open AD Users and Computers, open the properties for a user, and enter \home\users\%username% in the profile tab and then click OK, they get the following error. The \home\users\username home folder was not created because you do not have create access on the server. The user account has been updated with the new home folder value but you must create the directory manually after obtaining the required access right. Right now I have given the Helpdesk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder only show administrators full control, and no permissions for the configured user account. It sure sounds like I'd have to make the helpdesk local admins on the file servers, which is what I'd like to avoid. Especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.

    Read the article

  • Green (Screen) Computing

    - by onefloridacoder
    I recently was given an assignment to create a UX where a user could use the up and down arrow keys, as well as the tab and enter keys, to move through a Silverlight datagrid that is going to be used as part of a high throughput data entry UI. And to be honest, I’ve not trapped key codes since I coded JavaScript a few years ago.  Although the frameworks I’m using made it easy, it wasn’t without some trial and error.    The other thing that bothered me was that the customer tossed this into the use case as they were delivering the use case.  Fine.  I’ll take a whack at anything and beat myself up and beg (I’m not beyond begging for help) the community for help to get something done if I have to. It wasn’t as bad as I thought, and I thought I would hopefully save someone a few keystrokes if they wanted to build a green screen for their customer.   Here’s the ValueConverter to handle changing the strings to decimals and then back again.  The value is a nullable value type so there are a few extra steps to take.  Usually the “ConvertBack()” method doesn’t get addressed, but in this case we have two-way binding and the converter needs to ensure that if the user doesn’t enter a value it will remain null when the value is reapplied to the model object’s setter.

        using System;
        using System.Windows.Data;
        using System.Globalization;

        public class NullableDecimalToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                if (!(((decimal?)value).HasValue))
                {
                    return (decimal?)null;
                }
                if (!(value is decimal))
                {
                    throw new ArgumentException("The value must be of type decimal");
                }

                NumberFormatInfo nfi = culture.NumberFormat;
                nfi.NumberDecimalDigits = 4;

                return ((decimal)value).ToString("N", nfi);
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                decimal nullableDecimal;
                decimal.TryParse(value.ToString(), out nullableDecimal);

                return nullableDecimal == 0 ? null : nullableDecimal.ToString();
            }
        }

    The ConvertBack() method uses TryParse to create a value from the incoming string, so if the parse fails, we get a null value back, which is what we would expect.  But while I was testing I realized that if the user types something like “2..4” instead of “2.4”, TryParse will fail and still return a null.  The user is getting “puuu-lenty” of eye-candy to ensure they know how many values are affected in this particular view. Here’s the XAML code.   This is the simple part, we just have a DataGrid with one column here that’s bound to the appropriate ViewModel property with the Converter referenced as well.

        <data:DataGridTextColumn
            Header="On-Hand"
            Binding="{Binding Quantity,
                      Mode=TwoWay,
                      Converter={StaticResource DecimalToStringConverter}}"
            IsReadOnly="False" />

    Nothing too magical here.  Just some XAML to hook things up.   Here’s the code behind that’s handling the DataGrid KeyUp event.  These are wired to a local/private method but could be converted to something the ViewModel could use, but I just need to get this working for now.

        // Wire up happens in the constructor
        this.PicDataGrid.KeyUp += (s, e) => this.HandleKeyUp(e);

        // DataGrid.BeginEdit fires when DataGrid.KeyUp fires.
        private void HandleKeyUp(KeyEventArgs args)
        {
            if (args.Key == Key.Down ||
                args.Key == Key.Up ||
                args.Key == Key.Tab ||
                args.Key == Key.Enter)
            {
                this.PicDataGrid.BeginEdit();
            }
        }

    And that’s it.  The ValueConverter was the biggest problem starting out because I was using an existing converter that didn’t take nullable value types into account.   Once the converter was passing back the appropriate value (null, “#.####”) the grid cell(s) and the model objects started working as I needed them to. HTH.

    Read the article

  • A story of Murphy – my technical issues at TechDays Switzerland #chtd

    - by Laurent Bugnion
    I had two sessions at the recent Swiss TechDays. While the first one (Advanced Development for Windows Phone 8) went extremely well (I think), I had a very annoying technical issue in the beginning of my second session. First let me add that I talked to Microsoft about that and I hope they will change a few things in the room assignment for next year. My two sessions were one right after the other, with only a 15-minute break to change rooms. I don’t mind having two sessions so close to each other, but I would really like them to be in the same room in order to avoid having to move my laptops (plural, that will become important later) and redoing the tech check. That being said, I am guilty of not checking where my talks would be before the day before the conference, and when I did notice, it was too late to change it. After my first session, I quickly moved to the other room and set up my main laptop, a Dell Precision. We tested the video output (VGA) and didn’t notice anything special. The projectors are using a fairly high resolution (kudos to the Basel conference center for not having old school 1024x768 projectors anymore, that makes Blend really hard to demo ;) but since everything went great during the first talk, I was not worried. In fact I even had some time to chat with some early attendees about my Microsoft Surface and the Samsung Slate 7, which I had carried with me in addition to the Precision. I just thought it would be nice to show the hardware that Windows 8 can run on, without thinking any further. When the session started, I immediately noticed that the main screen was not showing anything. I thought I had just forgotten to switch to “duplicate” for the video output, and did that with a quick Win-P. However it didn’t “hold”. After 2 seconds, it reverted back to a black display for my attendees. Then I started to really worry. We tried everything, switching from VGA to HDMI, changing the resolution, setting the projector as primary display, but nothing did the trick. The projector was just refusing to show my screen. Now, to show you how desperate I was getting, I even considered using the “extend” setting (which worked just fine), and to use one of the feedback monitors on the floor, but really it was super cumbersome. Eventually, my last resort arrived: I started my Samsung Slate 7, which by chance has Visual Studio 12 and Blend 5 installed, plugged the HDMI projector in the dock (yes, I had the dock with me, which I usually don’t!), connected it to the internet (had to enter a long password for that), loaded the source code from my main machine using a USB stick and…. finally started to give my presentation. All in all I think we lost about 10 minutes. Amongst the most horrible minutes of my whole life, truly (yes I am blessed, I didn’t have that many horrible minutes in my life ;) I really want to apologize to my attendees. We joked a bit during the attempts to resolve the issue, and the reactions I had after the session were all very nice and sympathetic. Only a handful of people left my session while I was having the issues, and I really don’t blame them (who knew how long the problem would last!!). But still, I have probably spoken at more than 60 sessions over the years, and this was by far my most painful moment. What did I learn? So what did I learn from this? Well, from now on I will always have my slate ready with the latest source code, internet connection and every tool I might need during the presentation. 
This way, if I detect even a hint that the Precision might not work, I will just switch to the Slate. The experience of presenting on the slate is actually not bad at all, it is just a bit slow for my taste, but it does work. By the way, I will be posting the code and slides for my sessions very soon, I just need to “clean it and zip it”. Stay tuned, and thanks again for your patience in that horrible circumstance. Cheers Laurent

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using the Preview.app and a duplex-capable HP Color Laserjet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently into the tray if the document contains one page versus when it contains two pages. This is not obvious when printing on plain paper but becomes obvious when the front and reverse sides of the sheets are marked. Otherwise the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document versus keeping it set to true when emitting multi-page documents. All this despite duplex printing always being specified in the print dialog. When printing a single-sided document, duplex=true and duplex=false seem to make a difference with respect to which side of a sheet gets printed on. It would also be helpful if others could confirm the problem actually exists. I suspect this problem is not limited to specific printers. I'm on OS X 10.6 and I checked two different HP printers.

    Read the article

  • RAM ok in memtest86+ == RAM ok after wake from sleep?

    - by twon33
    I have a Windows XP (32-bit) system that appears stable in normal operation, but was repeatably freezing (hard lock, no BSOD) a minute or so after waking from S3 sleep. Some Googling against the motherboard model and memory manufacturer suggested that I might need to bump up the memory voltage, so I tried it and it now seems to resume without freezing. However, I don't really trust it and I'd like to validate that it's actually stable, especially after resuming from sleep. I've run Prime95 for a few hours with no issues, and am planning an overnight run of Memtest86+, which I expect to pass because the system has been solid whenever I've run it without putting it to sleep. Does something like Memtest86+ exist that actually invokes S3 sleep during operation? Clearly it would need an operator to wake the computer to resume testing, but I don't think I've ever heard of a memory test tool that can do this. Alternately, am I wasting my time? Should a clean bill of health from Memtest86+ indicate stability regardless of whether sleep is involved, or, conversely, does my original problem indicate that Memtest86+ would have failed eventually with the stock voltage if I'd run it, sleep or not?

    Read the article

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background Tony Hoare's billion dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote: Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability. Problem A lot of software, especially in Java, but likely in C# and C++, often uses the following pattern: public class SomeClass { private String someAttribute; public SomeClass() { this.someAttribute = "Some Value"; } public void someMethod() { if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { return this.someAttribute; } } Sometimes a band-aid solution is used by checking for null throughout the code base: public void someMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void anotherMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Default Value" ) ) { // do something... } } The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using: public void anotherMethod() { String someAttribute = this.someAttribute; assert someAttribute != null; if( someAttribute.equals( "Some Default Value" ) ) { // do something... } } Yet that requires two statements (assignment to local copy and check for null) every time a class-scoped variable is used to ensure it is valid. Self-Encapsulation Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble: public class SomeClass { private String someAttribute; public SomeClass() { setAttribute( "Some Value" ); } public void someMethod() { if( getAttribute().equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { String someAttribute = this.someAttribute; if( someAttribute == null ) { someAttribute = createDefaultValue(); setAttribute( someAttribute ); } return someAttribute; } protected String createDefaultValue() { return "Some Default Value"; } } All duplicate checks for null are superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle. Question What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)

    Read the article

  • Fun with Python

    - by dotneteer
    I have been taking a class on Coursera recently. My formal education is in physics. Although I have been working as a developer for over 18 years and have learnt a lot of programming on the job, I still would like to gain some systematic knowledge in computer science. Coursera courses taught by Stanford professors provided me with a wonderful chance. The three languages recommended for assignments are Java, C and Python. I am fluent in Java and have done some projects using C++/MFC/ATL in the past, but I would like to try something different this time. I first started with pure C. Soon I discovered that I have to write a lot of code outside the question that I try to solve because of the very limited C standard library. For example, to read a list of values from a file, I have to read character by character until I hit a delimiter. If I need a list that can grow, I have to create a data structure myself, something that I have taken for granted in .Net or Java. Out of frustration, I switched to Python. I was pleasantly surprised to find that Python is very easy to learn. The tutorial on the official Python site has exactly the right pace for me, someone with experience in another programming language. After a couple of hours on the tutorial and a few more minutes of toying with IDLE, I was in business. I like the “batteries included” philosophy that gives everything that I need out of the box. For someone from a C# or Java background, curly braces are replaced by colons (:) and indentation. Although I tend to miss a colon from time to time, I found that the idea of indentation is actually very nice once I get used to it. I also like the features of multiple assignment and multiple return values. When I need to return a by-product, I just add it to the list of returns. When would I use Python? I would use Python if I need to compute anything quickly. The language is very easy to use. Python has a good collection of libraries (packages). The REPL of the interpreter allows me to test ideas quickly before committing them to a script. Lots of computer science work has been ported from Lisp to Python. Some universities are even teaching SICP in Python. When wouldn’t I use Python? I mostly would not use it in a managed environment, such as IronPython or Jython. Both .Net and Java already have rich libraries, so one has to make a choice about which library to use. If we use the managed runtime library, the code will be tied to that particular runtime and thus not portable. If we use the Python library, then we will face the relatively long start-up time. For this reason, I would not recommend using IronPython for WP7 development. The only situation in which I see merit in managed Python is in a server application where I can preload Python so that the start-up time is not a concern. Using Python as a managed glue language is overkill most of the time. A managed Scheme could be a better glue language as it is small enough to start up very fast.
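    To illustrate the two conveniences mentioned above (this example is mine, not the author's, and values.txt is a made-up file of whitespace-separated numbers): reading a list of values takes one line, and returning several results is just returning a tuple.

        def min_max_mean(path):
            # No character-by-character parsing as in plain C:
            # read the whole file and split on whitespace.
            with open(path) as f:
                values = [float(tok) for tok in f.read().split()]
            # Multiple return values: just return a tuple.
            return min(values), max(values), sum(values) / len(values)

        lo, hi, mean = min_max_mean("values.txt")  # multiple assignment
        print(lo, hi, mean)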

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The following issue has arisen in parallel to installing nginx, which serves only the static files of my site. Not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double-redirects) to point the user to a different virtual folder. So, the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what may be the underlying issue. My server? The person's ISP? She has tested both at home and at work, with similar issues. While trying to process the request: GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1 Host: www.irishgaelictranslator.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-za Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null The following error was encountered: Invalid Response The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service and with Windows XP & SP3 (users and permissions); I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/Century Link, which is using advertising servers, which I think is causing the problem. All the computers are networked together, which is not how I set them up. I believe Century Link is networking them through assignment of a domain for our home. This causes all the computers to crash twice. This is getting expensive. We tried buying new hard drives but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and had to run off a flash and reboot this laptop to recovery every other day or so, this past month. New plan is: We are getting ready to install new equipment but are afraid to reinfect again. Need help to install new equipment. The plan is to use current internet services from Qwest/ now Century Link. The list of new equipment in order: Century Link wireless modem is ZyXEL PK5000Z with 4 direct connect Ethernet ports Next Dell Optiplex 210L ( used auction purchase ) 2 gb ram 80 g hard drive Ubuntu 11.10 operating system Next Wireless D-Link router WBR-1310 with 4 direct connect Ethernet ports OK-------- Purchased Dell OEM disk for Repair or Reinstalling Windows XP Professional Operating system (2 roommates as well) All infected computers are Dell desktops or laptops with XP Pro Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it. Questions 1] How do we fdisk the infected computers without infecting the new system. We have DOS disks, but none of the computers has a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon. 2] We are thinking of Avast internet security because of the boot scan. We want all software loaded before reconnecting. We can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but are not sure that is what we need. But we need to know how to set up the port security and services we will need. Really lost at this part. So we are safe when we go back on the internet. 3] Want to connect the reloaded systems to the router as a public connection with no sharing. Do not want to network all computers. 4] Want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or router? Any help would be a blessing. I do not want to go alone on this anymore.

    Read the article

  • htaccess: multiple redirections depending on domain name

    - by Marcin Kmiec
    I have a server and a few domains and two webpages. Can't figure out how to do the following: A.com -> root\ www.A.com -> root\ B.com -> root\ www.B.com -> root\ C.com -> root\folder1\ www.C.com -> root\folder1\ By the way. What is the 'and' logical operator used in htaccess? I found that 'or' is [OR] but [AND] doesn't seem to work. And what is the language htaccess is written in:)? UPDATE I made a mistake in the question though. Here's what I really want to do. DNS is set for the domain A.com to point to the root folder of the server. Now I would like to set the following redirections: Any domain other than C.com and other than D.com redirects (301) to www.A.com. A.com points to the root folder of the server anyway and that is set in DNS. Domain www.C.com points to the folder 'folder1' on the server. Can this be set in htaccess? Now domains C.com, www.D.com and D.com redirect to www.C.com.

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error svn: access to '/repos/!svn/vcc/default' forbidden? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running under Apache with mod_dav_svn. Running: svn ls http://myserver/repos/myproject/trunk Lists the correct files. But when I go to commit, I get the error: svn: access to '/repos/!svn/vcc/default' forbidden My Apache virtualhost for svn is: <VirtualHost *:80> ServerName svn.mydomain.com ServerAlias svn DocumentRoot "/var/www/html" <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/var/www/html"> Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> <Location /repos> Order allow,deny Allow from all DAV svn SVNPath /var/svn/repos SVNAutoversioning On # Authenticate with Kerberos AuthType Kerberos AuthName "Subversion Repository" KrbAuthRealms mydomain.com Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab # Get people from LDAP AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid # For any operations other than these, require an authenticated user. <LimitExcept GET PROPFIND OPTIONS REPORT> Require valid-user </LimitExcept> </Location> </VirtualHost> What's causing this error? EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these: [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"] I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that there's some security Apache module that might be blocking access. How do I change this?

    Read the article

  • Mac Mavericks, ngircd localhost works, private IP doesn't

    - by user221945
    I have configured ngircd to listen on my private ip address. It doesn't. Localhost works fine. Configuration test: ngIRCd 21-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+TCPWRAP+ZLIB-x86_64/apple/darwin13.2.0 Copyright (c)2001-2013 Alexander Barton () and Contributors. Homepage: http://ngircd.barton.de/ This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Reading configuration from "/opt/local/etc/ngircd.conf" ... OK, press enter to see a dump of your server configuration ... [GLOBAL] Name = irc.bellbookandpistol.com AdminInfo1 = Jaedreth AdminInfo2 = San Diego County CA, US AdminEMail = [email protected] HelpFile = /opt/local/share/doc/ngircd/Commands.txt Info = Server Info Text Listen = 10.0.1.5,127.0.0.1 MotdFile = MotdPhrase = "Welcome to irc.bellbookandpistol.com" Password = PidFile = Ports = 6667 ServerGID = wheel ServerUID = root [LIMITS] ConnectRetry = 60 IdleTimeout = 0 MaxConnections = 0 MaxConnectionsIP = 6 MaxJoins = -1 MaxNickLength = 9 MaxListSize = 0 PingTimeout = 120 PongTimeout = 20 [OPTIONS] AllowedChannelTypes = #&+ AllowRemoteOper = no ChrootDir = CloakHost = CloakHostModeX = CloakHostSalt = kBih5mu\kVI!DC6eifT(hd4m/0'zb/=: CloakUserToNick = no ConnectIPv4 = yes ConnectIPv6 = no DefaultUserModes = DNS = yes IncludeDir = /opt/local/etc/ngircd.conf.d MorePrivacy = no NoticeAuth = no OperCanUseMode = no OperChanPAutoOp = yes OperServerMode = no RequireAuthPing = no ScrubCTCP = no SyslogFacility = local5 WebircPassword = [SSL] CertFile = CipherList = HIGH:!aNULL:@STRENGTH DHFile = KeyFile = KeyFilePassword = Ports = [OPERATOR] Name = [REDACTED] Password = [REDACTED] Mask = [CHANNEL] Name = #BBP Modes = tnk Key = MaxUsers = 0 Topic = Welcome to the Bell, Book and Pistol IRC Server! KeyFile = As you can see, it should be listening on 10.0.1.5, but it isn't. After turning on Apache manually, port 80 works on 10.0.1.5, but port 6667 doesn't. It only works on localhost. Is there some terminal command I could use or some config file I could edit to get this to work?

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it can plausibly be demonstrated to third parties/court that the application has processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or the like, but rather some simple user-generated content, where some malicious users tried to blame the operator in the past when things escalated and went to court. My question: Is there some kind of signing software for Linux that signs each element of a log in such a way that it can be easily shown that no element can be added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under a GPL or similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous element's signature within the following element, but I would prefer to use some program where you do not have to argue about the validity of your own code. Note to mods: I am not asking this question on Stackoverflow, because I am not looking to write my own code for reasons described above. I think this question fits serverfault better than superuser, as server-side logging software is discussed here rather than on superuser.
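    For what it's worth, the chaining idea described above (each entry carrying the signature of the previous one) can be sketched in a few lines. The snippet below is only a simplified illustration using plain SHA-256 digests rather than the PGP/GPG signatures the question asks about; a real setup would additionally sign each digest (e.g. with gpg) and periodically anchor the latest digest somewhere external, such as a trusted timestamping service. File name and payloads are made up.

        import hashlib, json, time

        def append_entry(log_path, payload, prev_digest):
            # Each record embeds the digest of the previous record, so editing,
            # inserting or deleting an earlier entry breaks every later digest.
            entry = {"ts": time.time(), "payload": payload, "prev": prev_digest}
            digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
            with open(log_path, "a") as f:
                f.write(json.dumps({"entry": entry, "digest": digest}) + "\n")
            return digest

        d = "0" * 64  # genesis value
        d = append_entry("app.log", "<event>first</event>", d)
        d = append_entry("app.log", "<event>second</event>", d)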

    Read the article

  • Powershell script to delete secondary SMTP addresses of Exchange 2010 Mail Contacts

    - by Zero Subnet
    I have a few thousand Exchange 2010 Mail Contacts who get erroneously assigned internal SMTP addresses by the default recipient policy. I'm trying to use the following command to delete these addresses (keeping the primary SMTP) and disabling the automatic update from recipient policy so the SMTP addresses don't get recreated again. Get-MailContact -OrganizationalUnit "domain.local/OU" -Filter {EmailAddresses -like *@domain.local -and name -notlike "ExchangeUM*"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $_; $email = $contact.emailaddresses; $email | foreach {if ($_.smtpaddress -like *@domain.local) {$address = $_.smtpaddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact | set-mailcontact -emailaddresspolicyenabled $false} }} I'm getting the following error though: You must provide a value expression on the right-hand side of the '-like' operator. At line:1 char:312 + Get-MailContact -OrganizationalUnit "domain.local/testou" -Filter {EmailAddresses -like "@domain.local" -and name -notlike "ExchangeUM"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $; $ email = $contact.emailaddresses; $email | foreach {if ($.smtpaddress -like <<<< *@domain.local) {$address = $_.smt paddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.ident ity -EmailAddresses @{Remove=$address}; $contact }} + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : ExpectedValueExpression Any help as to how to fix this?

    Read the article

  • Apache displays error page half way through PHP page execution

    - by Shep
    I've just installed Zend Server Community Edition on a Windows Server 2003 box, however there's a bit of a problem with the display of a lot of our PHP pages. The code was previously running under the same version of PHP (5.3) on IIS without any issues. By the looks of things, Apache (installed as part of Zend Server) is erroring out during the rendering of the page when it comes across something it doesn't like in the PHP. Going through the code, I've been able to get past some of the problems by removing the error suppression operator (@) from function calls and by changing the format of some includes. However, I can't do this for the whole site! Weirdly, the error code is reported as "200 OK". The code snippet below shows how the Apache error HTML interrupts the regular HTML of the page. <p>Ma quande lingues coalesce, li grammatica del resultant lingue es plu simplic e regulari quam ti del coalescent lingues.</<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>200 OK</title> </head><body> <h1>OK</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p> <p>Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p> <p>More information about this error may be available in the server error log The Apache error log doesn't offer any explanation for this, and I've exhausted my Googling skills, so any help would be greatly appreciated. Thanks.

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS.  W_DAY_D is populated internally during the ETL process and will provide a row for every day in the given time range. Each row will contain aspects of that day such as calendar year, month, week, quarter, etc. to allow it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same set date range. The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) will be related to the date range defined, which is a start date and a rolling interval of a certain range - generally the start date plus 3 years.  In P6 Reporting Database 2.0 this date range was defined in the Configuration utility.  As of P6 Reporting Database 3.0, with the introduction of the Extended Schema this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6.  This same date range is validated and used for the P6 Reporting Database.  The rolling date range means if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013.  1/1/2010 will be the min date because we always back fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013.  When the ETL process is run the Reporting Database will pick up this change and also adjust the max date on the W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage these dates in these tables will progress forward with the rolling intervals. Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process will pull spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3-year date range will be bucketed into the last day in the date range. For the overall project and even the activity level you will still see the correct total values.  You just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future?  Generally this amount of granularity years in the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level.  The data is always there; the level of granularity is the decision.
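    As a toy illustration of the rolling window described above (this is not P6 code, just the arithmetic): the minimum date is back-filled to January 1st of the current year, and the maximum date rolls forward with today's date by the configured interval.

        from datetime import date

        def rolling_range(today, interval_years=3):
            # Min date: back-fill to January 1st of the current year.
            # Max date: today's month/day, interval_years ahead
            # (leap-day edge case ignored in this sketch).
            min_dt = date(today.year, 1, 1)
            max_dt = date(today.year + interval_years, today.month, today.day)
            return min_dt, max_dt

        print(rolling_range(date(2010, 4, 1)))  # (2010-01-01, 2013-04-01)
        print(rolling_range(date(2010, 4, 2)))  # (2010-01-01, 2013-04-02)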

    Read the article

  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could cause the target or source to be incomplete. This question applies to Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory; does it copy the entire directory, then delete it after, or does it copy then delete each file individually? I always realise after typing something like, mv verybigdir dest that I really should have typed cp -R verybigdir dest && rm verybigdir (where the && operator only moves to the next command if the first was successful) -- or is this pointless? What happens exactly when I press Ctrl+C half way through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (the last time was when using svn) and had two directories, with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
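    One way to get the "copy, then delete only if the copy succeeded" behaviour without remembering shell flags is to script it; the sketch below is just an illustration with made-up paths (note that Python's shutil.move itself falls back to copy-and-delete when the destination is on a different filesystem).

        import shutil

        src = "verybigdir"          # example source directory
        dest = "backup/verybigdir"  # example destination (must not already exist)

        # Copy first; the source is removed only if copytree finished without
        # raising, so an interrupted copy leaves the original intact.
        shutil.copytree(src, dest)
        shutil.rmtree(src)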

    Read the article

  • How can I work efficiently on a desktop sharing workflow?

    - by OSdave
    I am a freelance Magento developer, based in Spain. One of my clients is a Germany-based web development company and they're asking me for something I think is impossible. OK, maybe not impossible but definitely not a preferred way of doing things. One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. Their client has forbidden them to download the files from his server. My client is asking me now to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have read-only ssh access to their client's server, they came up with this solution: Set up a desktop/screen sharing session between one of their developers' stations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer: show me file foo.php The developer will then open this foo.php file in his IDE. I'll then have to ask him to show me the bar method, the parent class, etc... Remember that it's a read-only session, so forget about putting a Zend_Debug::log() anywhere, and don't even think about an xDebug breakpoint (they don't use any kind of debugger, sic). Their client has also forbidden them to use any version control system... My first reaction when they explained this to me was (and I actually did say it out loud to them): Well, find another client. but they took it as a joke from me. I understand that from a business point of view rejecting a client is not a good practice, but I think that the conditions of this assignment make it impossible to complete. At least according to my workflow. I mean, the way I work or learn a new framework/program is: download all files and copy of db on my pc create a git repository and a branch run the application locally use breakpoints use Zend_Debug::log() write the code and tests commit to git repo upload to (test/staging first if there is one, production if not) server I have agreed to try the desktop sharing session, although I think it will be a waste of time. On one hand I don't mind, they pay me for that time, but I know me and I don't like the sensation of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to say to them that I cannot (don't want to) do it. Well, I'll first try this desktop sharing session: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything. So I try to keep an open mind and I am always willing to learn new stuff. So my questions are: Can this desktop-sharing workflow work? What should be done in order to get the most out of it? Taking into account all the obstacles (geographic locations, no local, no git), is there another way for me to work on that project?

    Read the article

  • Can Haproxy deny a request by IP if its stick-table is full?

    - by bantic
    In my haproxy configs I'm setting a stick-table of size 5 that stores every incoming IP address (for 1 minute), and it is set as nopurge so new entries won't get stored in the table. What I'd like to have happen is that they would get denied, but that isn't happening. The stick-table line is: stick-table type ip size 5 expire 1m nopurge store gpc0 And the whole configs are: global maxconn 30000 ulimit-n 65536 log 127.0.0.1 local0 log 127.0.0.1 local1 debug stats socket /var/run/haproxy.stat mode 600 level operator defaults mode http timeout connect 5000ms timeout client 50000ms timeout server 50000ms backend fragile_backend tcp-request content track-sc2 src stick-table type ip size 5 expire 1m nopurge store gpc0 server fragile_backend1 A.B.C.D:80 frontend http_proxy bind *:80 mode http option forwardfor default_backend fragile_backend I have confirmed (connecting to haproxy's stats using socat readline /var/run/haproxy.stat) that the stick-table fills up with 5 IP addresses, but then every request after that from a new IP just goes straight through -- it isn't added to the stick-table, nothing is removed from the stick-table, and the request is not denied. What I'd like to do is deny the request if the stick-table is full. Is this possible? I'm using haproxy 1.5.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many: Announcing the arrival or departure of a member Updating partition assignment maps across the cluster Creating or destroying a NamedCache Invalidating a cache entry from a large number of client-side near caches Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors) Invoking clear() on a NamedCache The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow-control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster. 
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
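    The arithmetic behind the figures above can be written out as a quick back-of-the-envelope calculation (this sketch simply reproduces the article's approximations; real limits depend on the NIC, packet sizes and message bundling):

        packets_per_sec = 70_000      # rough ceiling for one gigabit NIC
        cluster_members = 500
        members_per_machine = 10

        # With multicast disabled, one cluster-wide message costs roughly one
        # unicast packet per other member (~500 here).
        msgs_per_server = packets_per_sec / cluster_members              # 140.0
        # Ten members share each machine's NIC, so each gets a tenth of that.
        msgs_per_member = msgs_per_server / members_per_machine          # 14.0
        print(msgs_per_server, msgs_per_member)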

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the Multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I tried so far. Problem R1 R2 R3 RESOURCES : 8 8 8 GROUPS: G1: 11.0 3 2 2 12.0 1 1 3 G2: 20.0 1 1 3 5.0 2 3 2 G3: 10.0 2 2 3 30.0 1 1 3 Sorting strategies To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the multidimensional space origin, thus calculating SQRT(R1^2 + R2^2 + ... + RN^2). I felt like this was a keen solution as it somehow privileged those choices with resource usages closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many[30] different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above which sorts like this: R1 R2 R3 RESOURCES : 8 8 8 GROUPS: G1: 12.0 1 1 3 < select this 11.0 3 2 2 G2: 20.0 1 1 3 < select this 5.0 2 3 2 G3: 30.0 1 1 3 < select this 10.0 2 2 3 And it is not feasible because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second best choices in group 1 or 2, so I'll need some kind of iteration (local search[?]) to find the starting feasible solution for my local search solution. Here are the options I came up with Option 1: iterate choices I tried to find a way to iterate all the choices with a specific order, something like G1 G2 G3 1 1 1 2 1 1 1 2 1 1 1 2 2 2 1 ... believing that feasible solutions won't be that far away from the unfeasible one I start with and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group keeping "as near as possible" to the previous iteration? Option 2: Change the comparison term I tried to think how to find a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand of a more precious resource will push you down the list, but this didn't help at all. Also I thought there probably isn't going to be such a comparison variable which assures me a feasible solution at first strike. Is there such a variable? If not, is there a better sorting criterion anyway? Option 3: implement any known sub-optimal fast solving algorithm Unfortunately, I could not find any such algorithms online. Any suggestion?
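    For what it's worth, Option 1 can be sketched directly on the example problem above: order each group's choices by their distance from the origin, then walk the combinations in that order until one fits. This is my own brute-force illustration (full enumeration), so a real instance would need a smarter neighbourhood walk, but on this instance it finds the expected repair (the second-best choice in group 2) after three combinations.

        import math
        from itertools import product

        resources = [8, 8, 8]
        groups = [  # (gain, [r1, r2, r3]) per choice, from the example problem
            [(11.0, [3, 2, 2]), (12.0, [1, 1, 3])],
            [(20.0, [1, 1, 3]), (5.0, [2, 3, 2])],
            [(10.0, [2, 2, 3]), (30.0, [1, 1, 3])],
        ]

        def norm(choice):
            # Distance of the choice's resource vector from the origin.
            return math.sqrt(sum(r * r for r in choice[1]))

        def feasible(picks):
            totals = [sum(c[1][i] for c in picks) for i in range(len(resources))]
            return all(t <= cap for t, cap in zip(totals, resources))

        ordered = [sorted(g, key=norm) for g in groups]
        start = next((p for p in product(*ordered) if feasible(p)), None)
        print(start)  # ((12.0, [1, 1, 3]), (5.0, [2, 3, 2]), (30.0, [1, 1, 3]))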

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TLDR Google tends to ignore special characters in my search strings. Is there anything that I can do about it and is it, possibly, happening because Google makes certain assumptions based off of my default text-encoding settings and my location? I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out further information. I attempted various search terms which were all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically one which I did try to search) and stated that I could have even hit the "I'm feeling lucky" button and have gotten my answer. The problem is I did search that, and the results that I received were far more basic and even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain but it does provide an empirical example of something I've seen before that really bugs me-- Google often ignores special characters in my search strings and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google? NOTE I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results based off of certain, localized text-based assumptions. Regardless, I would like for Google to search for what I give it. Is there anything that I can do to improve my results?

    Read the article
