Search Results

Search found 1247 results on 50 pages for 'attention'.

Page 19 of 50

  • Finding proof of server being compromised by Black Hole Toolkit exploit

    - by cosmicsafari
    I recently took over maintenance of a company server (Just Host, cPanel, Linux). There are a ton of websites on it which I know nothing about. It came to my attention that a client had attempted to access one of the websites hosted on this server and was met with a warning from Windows Defender. It blocked access because it said the website had been compromised by the Black Hole Toolkit, or something to that effect. Anyway, I went in, updated various plugins and deleted some old suspect websites. I have since run the website in question through a few online malware scanners and it comes up clean every time. However, I'm not convinced. Do any of you know thorough ways I can check that the server isn't still compromised? I have no way to install any malware scanners or antivirus programs on the server, as it is horribly locked down by Just Host.

    Read the article

  • How do I stop track changes from turning on automatically in Word 2007

    - by Benj
    Whenever I open an existing document in Word 2007 (on Windows XP), Word turns on track changes and changes the display mode to "Final" (that is, not "Final Showing Markup"), so I often don't even notice track changes is on unless I remember to pay attention. This happens for ALL existing documents, and doesn't happen for new documents. I can't find any option in the configuration that would control this behavior. I would like to restore the original/default behavior where documents open with track changes off and in "Final Showing Markup" display. Steps to reproduce: open Word 2007; create a new document; verify that track changes is off; save the document and close Word; open the document (either directly or through Word); track changes is now on. Any ideas?

    Read the article

  • How can I enforce directory space limits in an OpenVZ system?

    - by George
    The title says it all. I have some programs on a server (CentOS 4, OpenVZ) that use a directory as a temp directory but pay no attention to how large it grows. I want to enforce a limit, e.g. this folder cannot exceed 300 MB. I would use quota, but OpenVZ does not support loop devices to mount a file as such. Any other solutions (apart from scripting a periodic delete of files in the directory)? Editing the application's code to implement such functionality is not entirely out of the question, if it can be done easily and no other ways exist. It's written in C++, but I don't know how to implement it.

    Read the article

  • Graphic Setup tune-up checklist

    - by Click Ok
    I was trying to play the game Warzone 2100 and the game runs fine, with nice speed, but the screen shows flickering horizontal lines... My PC has an integrated GeForce Go 6100 video card. OK, not a powerful GPU, but it shouldn't be the end of the world to run a "simple" game like this (compared with other games that require you to shell out for an expensive video card). So I think the problem may be the configuration of my machine. I use it primarily for programming work, so I have paid little attention to the video setup. I would like a checklist to know whether my PC is "ready" for games. For example, I know that I need: the latest video drivers, and updated DirectX and OpenGL. What do you suggest? Are there also good programs to test performance and suggest improvements to the system? Thank you! PS: I'm using Windows 7

    Read the article

  • Dump nginx config from running process?

    - by Sergio Tulentsev
    Apparently, I shouldn't have spent a sleepless night trying to debug an application. I wanted to restart my nginx and discovered that its config file is empty. I don't remember truncating it, but fat fingers and reduced attention probably played their part. I don't have a backup of that config file. I know I should have made one. Luckily for me, the current nginx daemon is still running. Is there a way to dump its configuration to a config file that it'll understand later?

    Read the article

  • I bought a domain name at GoDaddy and hosting at Dreamhost, but the domain doesn't work!

    - by janooChen
    I added Dreamhost's nameservers about 12 hours ago. I went to the following panel: Nameservers -> Set Nameservers (I have specific nameservers for my domains) and added Dreamhost's nameservers like this: Nameserver 1: NS1.DREAMHOST.COM, Nameserver 2: NS2.DREAMHOST.COM, Nameserver 3: NS3.DREAMHOST.COM. So now in the admin panel I see this: Nameservers (Last Update 2/10/2011): NS1.DREAMHOST.COM, NS2.DREAMHOST.COM, NS3.DREAMHOST.COM. But I get this when I run the analysis tools: "Attention Required! There are critical issues. Accessing Your Web Site: Properly configuring your domain name and hosting account ensures that visitors can access your site." Did I do something wrong, or do I have to wait 24 to 48 hours? (Dreamhost does display my page, because I can access the other domain name I bought together with the hosting.) By the way, if everyone uses the same nameservers, how will GoDaddy know which hosting space I purchased among all the others? Thanks in advance.

    Read the article

  • Ubuntu doesn't start and I can't log in [migrated]

    - by Meph00
    My Ubuntu 13.04 doesn't boot anymore: eternal black screen. If I press ALT+CTRL+F1 I see that it's stuck on "Checking battery state [OK]." I'd like to try sudo apt-get install gdm, but I can't log in on terminal tty2, tty3, etc. They correctly ask for my nickname, then make me wait a long time, ask for the password and make me wait again. After a lot of time (... a lot) the best I could achieve was seeing "Documentation: https://help.ubuntu.com". I can never reach the point where I can enter commands. Plus, during the long pauses, every 2 minutes it gives a message like this: INFO: task XXX blocked for more than 120 seconds. Any suggestion? Sorry for my bad English, and thanks everyone for the attention.

    Read the article

  • Why obfuscate the serial number of a device? What is the risk?

    - by Horst Walter
    In one of my xx.stackexchange questions I got an answer in which the user had obfuscated his disk's SN (serial number). Recently I have seen this in several photos as well: the SN was blurred out. I'm just curious, because I have never paid attention to this. What could be the potential risk in publishing a device's SN? I do see some sense when it comes to a MAC address; OK, that could be used for tracking. But the SN of a disk, iPad, or whatever? Maybe there is an important reason for not publishing it which I haven't seen so far.

    Read the article

  • Tool to launch a script driven by modem activity

    - by Will M
    Can anyone suggest a software tool (preferably under Windows XP or later) that would launch an application or script in response to a phone call being received on a landline phone line connected to a data modem on the same PC? Or, better, in response to a sequence of touch-tones being played over such a phone line. This would allow, for example, using the telephone to manipulate firewall settings so as to create another layer of security for remote internet access to that computer. I seem to recall seeing tools to do this sort of thing in the days before broadband internet access, when there was more attention to various tips and tricks for the dial-up modem, but a few attempts at Google haven't turned anything up.

    Read the article

  • SQL Server: How to report an error from a PowerShell job step

    - by Ariel
    I have a SQL job whose step of 'PowerShell' type does 'exit 1' if it encounters an error, i.e. $ErrorActionPreference='stop'; trap{"$_"; exit 1} The problem is that SQL Server doesn't pay attention to that exit code and reports "The step did not generate any output. Process Exit Code 0. The step succeeded." Any idea how to successfully tell SQL Server from a PowerShell step that something went wrong? Thanks in advance.

    Read the article

  • Stop a Windows XP taskbar item from blinking forever

    - by EMP
    In Windows XP, when an application that doesn't currently have the focus wants to attract the user's attention, its taskbar item blinks. Often it blinks 3 times and then stops, which is fine. However, sometimes it just keeps blinking forever. An example of that is Firefox with a new JavaScript confirmation dialog. This is really annoying if I don't want to switch to that application just now - I basically cannot focus on anything else because of this stupid blinking thing distracting me! How do I force all apps to blink only 3 times (or X times) and then stop?

    Read the article

  • Server stopped responding; where should I look to find out what happened?

    - by Cid
    I have a server that had been running for well over 5 months and suddenly it stopped responding. I couldn't SSH into it or anything else, so I decided to reboot it, and the reboot fixed it. I'm trying to figure out what happened and I'm not sure exactly where to look. I started to look in /var/log, but there are tons of files in there and I'm not sure which ones I should pay attention to. I'm slowly going through each one of them, but if anyone can point me in the right direction, it would be great. Thanks!

    Read the article

  • Automatic highlighting of text in Thunderbird

    - by m000
    On days with a large email influx, some emails tend to slip my attention. Usually these are emails where I am not the primary recipient, but I am referenced at some point as having to perform some action. This point may occur after several responses to a thread. E.g. (the latest reply, with the earlier messages quoted inline):

    Hello B,
    > Hello A,
    > > Dear all,
    > > Could someone give me feedback on document X? Thanks, A.
    > Please find my remarks below: ... Best, B.
    Thank you for the feedback. I have incorporated it in the document. C, can you make a final proof-reading pass and post it online? Thanks, A.

    Is there a way in Thunderbird to highlight the sentence/paragraph where my name ("C" in this example) or some other configurable keyword occurs, so that I can easily check whether I have to take action? Note that I am looking for a way to identify the part of the message that directly concerns me, not which messages concern me. So filters/tags won't really help.

    Read the article

  • Is a low voltage licence needed to install network cable?

    - by BCS
    In some places in the US you need a "low voltage licence" to install "low voltage wiring", but I can't seem to find anything that says whether this includes network wiring. (I know this will differ from place to place, so please state where you are referring to.) Does the law say you need a licence? Does anyone pay attention to it? Are there any exceptions you know of? How sure are you of your information? Got any links? The "me specific" parts of the question: P.S. I'm looking for info on Idaho, USA, but answers for anywhere are welcome. Edit: I have some formal training in this and am completely confident that I can do it safely and correctly. However, the training was pre-2008, before Idaho switched to the national building code.

    Read the article

  • If email not received then do X (Outlook 2013 on Exchange 2010)

    - by Brad
    I receive notification emails daily and would like an easier, automated way to manage all of those notifications. For example: Notification 1 from [email protected] is received daily between 10pm-1am; Notification 2 from [email protected] is received daily between 12am-3am; Notification 3 from [email protected] is received daily between 1am-4am. I am looking for a way to page myself at [email protected] on my cellphone if any of these messages are not received within the defined time frame in which the email should have arrived. I would basically like to email a page like: ATTENTION: Notification 2 not received within the allowed range. This way I would be notified instead of having to check the mailbox manually and see that I only received two of the three alerts. Is there a way to do this in Outlook? Our Exchange server is a hosted Exchange server on GoDaddy, if that info is needed.

    Read the article

  • Domain Hosting, DNS/NS Management [closed]

    - by ledy
    What criteria do you have for choosing the right provider for registering a domain? (Not hosting, only the domain.) So far, I can only see the different pricing and features of the DNS manager, but have no idea what else I have to pay attention to. How can I judge the quality of a provider's nameservers, and could there be differences in speed? Background: the hosting itself (webspace, server, etc.) is separate from the domain, and I am not going to book the domain with the same hoster as the webspace/server.

    Read the article

  • Internet connection resets (Wi-Fi), no device has Wi-Fi for a moment

    - by Gizmo
    I have a few devices set up, and I noticed this happens with both my old and new router (two different vendors and models). It happens occasionally, and when I tried Wireshark today this caught my attention:

    No.  Time          Source             Destination        Protocol  Length  Info
    69   6.464423000   LiteonTe_7f:64:30  Broadcast          ARP       42      Who has 192.168.2.4? Tell 0.0.0.0
    73   7.451146000   LiteonTe_59:65:94  Arcadyan_e1:8c:cf  ARP       42      Who has 192.168.2.254? Tell 192.168.2.1
    85   9.450443000   LiteonTe_59:65:94  Arcadyan_e1:8c:cf  ARP       42      Who has 192.168.2.254? Tell 192.168.2.1
    91   10.521000000  LiteonTe_59:65:94  Broadcast          ARP       42      Who has 192.168.2.254? Tell 192.168.2.1
    93   10.860953000  LiteonTe_7f:64:30  Broadcast          ARP       42      Who has 192.168.2.254? Tell 192.168.2.4
    97   11.099463000  LiteonTe_7f:64:30  Broadcast          ARP       42      Who has 192.168.2.254? Tell 192.168.2.4

    where 192.168.2.254 is the gateway/router, 192.168.2.1 is my laptop's IP and 192.168.2.4 is my sister's laptop. What's happening here?

    Read the article

  • How can I exclude a file in a folder from basic auth (regex help)?

    - by simon180
    Hi, I have a folder on my site which contains admin files, and I've added basic auth following a little unwanted attention. This works fine; however, a couple of the admin functions won't work through basic auth as they handle file uploads, so I want to exclude these files from the auth. It shouldn't have any security implications, as a rogue user wouldn't be able to access the pages that could create a session to use these functions. I am using the following basic code to exclude a file:

    <FilesMatch "(index.php\/myadminfolder\/myurl\/myaction/someotherstuff?)$">
        Satisfy Any
        Order allow,deny
        Allow from all
        Deny from none
    </FilesMatch>

    The URL exclusion is not working. The URL to exclude is in the form: index.php/directory/subdirectory/action/uniqueid/blah. What is the correct pattern to add to FilesMatch to exclude any URLs that start with index.php/directory/subdirectory/action, regardless of what comes after action? Thanks, Simon

    Read the article

  • Is the MySQL Community edition safe for production use?

    - by n_kips
    Or will I need to get the Enterprise version? I ask because I found this on MySQL's site: "If you are running a MySQL production level system, we would like to direct your attention to the product description of MySQL Enterprise Edition at: http://mysql.com/products/enterprise/" When I check the features, it seems like the Community edition does not support transactions, while the Enterprise version does. If it is true that the Community edition is not right for production, then it seems like PostgreSQL may be my way out, for it supports transactions and is fully open source. Will the SQL syntax need to change (much) if I have to switch? Thank you.

    Read the article

  • Linked Lists in Java - Help with assignment

    - by doron2010
    I have been trying to solve this assignment all day; please help me, I'm completely lost. Representation of a string in linked lists: every node in the list has 3 fields: the letter itself, the number of times it appears consecutively, and a pointer to the next node in the list. The following class CharNode represents a node in the list:

    public class CharNode {
        private char _data;
        private int _value;
        private CharNode _next;

        public CharNode(char c, int val, CharNode n) {
            _data = c;
            _value = val;
            _next = n;
        }

        public CharNode getNext() { return _next; }
        public void setNext(CharNode node) { _next = node; }
        public int getValue() { return _value; }
        public void setValue(int v) { _value = v; }
        public char getData() { return _data; }
        public void setData(char c) { _data = c; }
    }

    The class StringList represents the whole list:

    public class StringList {
        private CharNode _head;

        public StringList() { _head = null; }
        public StringList(CharNode node) { _head = node; }
    }

    Add methods to the class StringList according to the details below. (Note: these are methods from the class String, and we want to implement them using the linked-list representation of a string explained above.)
    public char charAt(int i) - returns the char at position i in the string. Assume that the value of i is in the valid range.
    public StringList concat(String str) - returns a string that consists of the string it is operated on with the string "str" concatenated at its end.
    public int indexOf(int ch) - returns the index in the string it is operated on of the first appearance of the char "ch". If the char "ch" doesn't appear in the string, returns -1.
    public int indexOf(int ch, int fromIndex) - returns the index in the string it is operated on of the first appearance of the char "ch", with the search beginning at index "fromIndex". If the char "ch" doesn't appear in the string, returns -1. If the value of fromIndex isn't in the valid range, returns -1.
    public boolean equals(String str) - returns true if the string it is operated on is equal to the string str; otherwise returns false. This method must be written recursively, without using loops at all.
    public int compareTo(String str) - compares the string the method is operated on to the string "str" given as a parameter. The method returns 0 if the strings are equal, a negative number if the string in the object is lexicographically smaller than the string "str", and a positive number if the string in the object is lexicographically bigger than the string "str".
    public StringList substring(int i) - returns the list of the substring that starts at position i in the string on which it operates, i.e. the substring from position i until the end of the string. Assume the value of i is in the valid range.
    public StringList substring(int i, int j) - returns the list of the substring that begins at position i and ends at position j (not included) in the string it operates on. Assume the values of i and j are in the valid range.
    public int length() - returns the length of the string on which it operates.
    Pay attention to all the possible error cases. State the time complexity and space complexity of every method you write, and make sure the methods are efficient. It is NOT allowed to use ready-made Java classes. It is NOT allowed to convert to a String and use String operations.
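
    To make the run-length representation concrete, here is a small illustrative sketch (my own example, not part of the assignment; the class name CharNodeDemo is hypothetical) that builds the string "aaabb" as two nodes using the CharNode class above and counts its characters by summing the run lengths:

    public class CharNodeDemo {
        public static void main(String[] args) {
            // "aaabb" is stored as two nodes: ('a', 3) -> ('b', 2)
            CharNode second = new CharNode('b', 2, null);
            CharNode head = new CharNode('a', 3, second);

            // Walk the list and sum the run lengths to get the string length.
            int length = 0;
            for (CharNode cur = head; cur != null; cur = cur.getNext()) {
                length += cur.getValue();
            }
            System.out.println(length); // prints 5, the length of "aaabb"
        }
    }

    The same traversal pattern (walk node by node, consuming getValue() characters per node) is the backbone of methods such as charAt, indexOf and length.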

    Read the article

  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on a production server when they actually meant to execute it on the development server. Developers and DBAs get confused because when they use SQL Server Management Studio (SSMS) they forget to pay attention to the server they are connected to. It is very easy to fix this problem: you can select a different color for each server. Once each server has its own color in the status bar, it is easier for developers to notice which server they are about to execute the script against. Personally, when I work on SQL Server development, here is the color code I follow: I keep green for my development server, blue for my staging server and red for my production server. Honestly, the specific colors do not signify much; a different color for each server is the key here.

    More Tips on SSMS in SQL in Sixty Seconds:
    Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021
    Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020
    Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019
    Importing CSV into SQL Server – SQL in Sixty Seconds #018
    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Book Review: “Inside Microsoft SQL Server 2008: T-SQL Querying” by Itzik Ben-Gan et al

    - by Sam Abraham
    In the past few weeks, I have been reading “Inside Microsoft SQL Server 2008: T-SQL Querying” by Itzik Ben-Gan et al. In the next few lines, I will provide a quick review, having finished reading this valuable resource on SQL Server 2008. In this book, the authors target most of the common as well as advanced T-SQL querying scenarios that one would use for development on a SQL Server database. The book's content covers sufficient theory and practice to empower its readers to systematically write better performance-tuned queries. Chapter one offers a quick refresher on the basics of query processing. Chapters 2 and 3 follow with thorough coverage of applicable relational algebra concepts, which sets a good stage for chapter 4 to dive deep into query tuning. Chapter 4 has been my favorite chapter of the book, as it provides nice illustrations of the internals of indexes, waits, statistics and query plans. I particularly appreciated the thorough explanation of execution plans, which helped clarify some areas I may not have paid particular attention to in the past. The book continues to focus on SQL operators, tackling a few in each chapter and covering their internal workings and the best practices to follow when they are used. Figures and illustrations have been particularly helpful in grasping the advanced concepts covered therein. In conclusion, Inside Microsoft SQL Server 2008: T-SQL Querying provided me with 750+ pages of focused, advanced and practical knowledge that has added a few tips and tricks to my arsenal of query tuning strategies. Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group. --Sam Abraham

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit but have been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx) and Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully though, because I'm going to be dancing around a ton of NDA mine fields as well as having to walk the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; they're not thoughts that Microsoft has asked me to pass along and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit.

    Let's start off with a short imaginary question and answer session.
    Have the App Hub forums and XBLIG management been handled rather poorly by Microsoft? Yes.
    Do I think we're going to see changes to that overnight? No.
    Will it continue to look bad from the outside? Somewhat.

    Confusing, right? Well, that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited about and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay.

    So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown and everyone has and wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content. They all need to be there, because Xbox LIVE Indie Games has all of that and more. Along with the catch-up in features/documentation/awesomeness there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new guys don't have the history of being around from the beginning. This leads to a lack of understanding at times of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving. So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you and they're spending so much time with your new baby brother. Growing pains. All families, and in our case our product family, experience it to some degree.
    I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention. While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to the talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed: http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone and we're not even being treated worse. Not great, but also in a weird way a very good sign.

    Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers, so maybe someday?) but it was much larger than my original guesstimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share.

    On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there other than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position I have a resume waiting in the wings. I can't deny that I think that would be my dream job... ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs, and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out, and hopefully that's sooner rather than later.

    During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there... went a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing you guys be agitated. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list and I took that list and verbalized it.
    I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person... I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across. Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: from people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/) who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it and I made sure to highlight that.

    So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground: it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it.

    It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands and it got noticed and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted, wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed, things that should be improved, and I think we should definitely stay vocal about that with Microsoft. Keep it short, focused and prioritized. There's also a lot we can do ourselves while we're waiting on them to fix and change things. Lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way Indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • Pinterest and Social Commerce: The Social Networking Site Retailers Shouldn’t Ignore

    - by Jeri Kelley
    If you are in the midst of remodeling your home, researching the latest spring fashion trends, or simply trying to figure out what to cook for dinner, you've probably been on Pinterest and, like me, find it extremely useful for generating new ideas and storing them all in one place. Gone are the days of folding over corners of magazines or bookmarking the URL of a Web page – Pinterest makes it easy for you to “pin” ideas, photos, links, and more to virtual bulletin boards where your “followers” can repin, like, and share. As a consumer, Pinterest has gained my attention, and I'm definitely not the only one. According to a Monetate infographic, Pinterest's unique visitors increased 329% from September to December 2011. With this explosion of users, what does it mean for social commerce? Also according to Monetate, Pinterest is one of the top traffic drivers for retailers – driving even more traffic than popular social networking sites like Google+. For businesses, creating a presence on Pinterest is a great way to extend the reach of your brand, increase inbound links, and drive more traffic to your site. Socialnomics has a great post on how some of the biggest retail brands are using Pinterest to connect with consumers, offer cool content, and engage on a more personal level. When evaluating your social commerce program, remember that while Facebook still delivers the most referrals, Pinterest shouldn't be ignored as a way to reach and connect with as many consumers as possible.

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires me to connect permanently to the customer's network through a VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than 5 years within my company's network, I was quite curious about their solution and how it would actually differ from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).

    [Screenshot: WatchGuard Firebox SSL - About dialog]

    Borrowing some files from a Windows client installation

    Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles here (and no restart despite installation of TAP device drivers!), and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package and my long-standing, proven and stable OpenVPN installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:

    [Screenshot: Application folder below the user profile with configuration and certificate files]

    From there we are going to borrow four files, namely:

    ca.crt
    client.crt
    client.ovpn
    client.pem

    and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.

    Configuration of OpenVPN (console)

    Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps on Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:

    $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome

    Just to be on the safe side. The four above-mentioned files from your Windows machine could be copied anywhere, but either you place them below your own user directory or you put them (as root) below the default directory: /etc/openvpn. At this stage you would already be able to do a test run. Just in case, run the following command and check the output (it's similar information to what you would get from the 'View Logs...' context menu entry in Windows):

    $ sudo openvpn --config client.ovpn

    Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
    [Screenshot: Remote server and user authentication to establish the VPN]

    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

    Simplifying your life - authentication file

    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension for OpenVPN on Linux systems, and then open the client configuration file in order to extend an existing line.

    $ sudo mv client.ovpn client.conf
    $ sudo nano client.conf

    You should have content similar to this:

    dev tun
    client
    proto tcp-client
    ca ca.crt
    cert client.crt
    key client.pem
    tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
    remote-cert-eku "TLS Web Server Authentication"
    remote 1.2.3.4 443
    persist-key
    persist-tun
    verb 3
    mute 20
    keepalive 10 60
    cipher AES-256-CBC
    auth SHA1
    float 1
    reneg-sec 3660
    nobind
    mute-replay-warnings
    auth-user-pass auth.txt

    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the 'auth-user-pass auth.txt' directive on the last line, and we have to create a new authentication file 'auth.txt'. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':

    $ sudo nano auth.txt

    and just put two lines of information in it - the username on the first line, and the password on the second line, like so:

    myvpnusername
    verysecretpassword

    Store the file, change permissions, and call openvpn with your configuration file again:

    $ sudo chmod 0600 auth.txt
    $ sudo openvpn --config client.conf

    This should now work without you being prompted to enter a username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so:

    $ sudo service openvpn start client
    $ sudo service openvpn stop client

    Using Network Manager

    For newer Linux users, or the ones with 'console-phobia', I'm now going to describe how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

    [Screenshot: Network connections overview in Ubuntu]

    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'.

    [Screenshot: Choose connection type to import VPN configuration]

    Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
    Next, on the 'VPN' tab proceed with the following steps (the corresponding directives from the configuration file are given in parentheses):

    General
    Check the IP address of the Gateway ('remote' - we used 1.2.3.4 in this setup)

    Authentication
    Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
    Enter the User name to access your client keys (Auth Name: myvpnusername)
    Enter the Password (Auth Password: verysecretpassword) and choose your password handling
    Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
    Browse for your CA Certificate ('ca' - should be filled in as ca.crt)
    Specify your Private Key ('key' - here: client.pem)

    Then click on the 'Advanced...' button and check the following values:

    Use custom gateway port: 443 (second value of the 'remote' directive)
    Check the selected value of Cipher ('cipher')
    Check HMAC Authentication ('auth')
    Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

    Finally, confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry in your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require additional attention.

    Advanced topic: routing

    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I have actually been establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side, there are two possibilities: proper routing on both sides of the connection, which enables access in both directions, or network masquerading on the 'client side' of the connection. In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google. OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so:

    $ sudo tail -n20 /var/log/syslog

    Or, if your system is quite busy with logging, like so:

    $ sudo less /var/log/syslog | grep ovpn

    The output should contain a PUSH_REPLY message similar to the following one:

    Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

    The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: the IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase your netmask. After the VPN connection is established, we have to extend the iptables rules in order to route and masquerade IP packets properly.
    I created a shell script to take care of those steps:

    #!/bin/sh -e

    IPTABLES=/sbin/iptables
    DEV_LAN=eth0
    DEV_VPNS=tun+
    VPN=192.168.1.0/24

    $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
    $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
    $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.

    Simplifying your life - automatic connect on boot

    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via its init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly:

    $ sudo nano /etc/default/openvpn

    which should have content similar to this:

    # This is the configuration file for /etc/init.d/openvpn
    #
    # Start only these VPNs automatically via init script.
    # Allowed values are "all", "none" or space separated list of
    # names of the VPNs. If empty, "all" is assumed.
    # The VPN name refers to the VPN configuration file name.
    # i.e. "home" would be /etc/openvpn/home.conf
    #AUTOSTART="all"
    #AUTOSTART="none"
    #AUTOSTART="home office"
    #
    # ... more information which remains unmodified ...

    With the OpenVPN client configuration as described above, you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established. You can easily test this configuration without a reboot, like so:

    $ sudo service openvpn restart

    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi

    Read the article
