Search Results

Search found 56854 results on 2275 pages for 'temporary internet files'.

Page 109/2275 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • Ubuntu Server - VirtualBox?

    - by user186144
    So I got VirtualBox just for Ubuntu Server; I'm currently running Ubuntu 12.04 as my main OS. But when I got everything set up in Ubuntu Server 12.04, including my ports forwarded and a test Minecraft server up... I realized that nobody could join via the public IP I sent out, not even me! I can connect to the VM's IPv4 address, so it acts like I didn't forward my ports at all. But I forwarded 25565 inbound and outbound, both TCP and UDP. Is this just a VirtualBox issue, or am I doing something wrong? (Using eth0, a wired connection.)
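    One quick way to narrow this down is to test whether the port actually answers on each address. Below is a minimal sketch in Python; the addresses are placeholders for the VM's LAN address and the public IP, not values from the question:

      # Hypothetical reachability check: try each address on the Minecraft port
      # and report which ones accept a TCP connection.
      import socket

      targets = [("192.168.1.50", 25565),    # placeholder: the VM's IPv4 address
                 ("203.0.113.10", 25565)]    # placeholder: the public IP you handed out

      for host, port in targets:
          try:
              with socket.create_connection((host, port), timeout=5):
                  print(f"{host}:{port} is reachable")
          except OSError as exc:
              print(f"{host}:{port} failed: {exc}")

      # Note: testing the public IP from inside your own LAN can fail even when
      # forwarding is correct, if the router does not support NAT loopback.

    If the VM's address answers but the public IP does not even from an outside connection, the server itself is fine and the forwarding path (router rules and/or the VirtualBox networking mode) is the place to look.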

    Read the article

  • CSS padding displays in FF and Chrome, but not in IE 8? [migrated]

    - by bullitt five
    I'm updating the CSS on a page design, trying to put borders around my images, with 7px of padding between the image and the border. It seems to be working fine in Firefox and Chrome, but IE displays the border directly against the image, with no padding. Any suggestions? CSS code: img.right { float: right; margin: 0px; border: 1px solid #999; padding: 7px; margin-left: 10px; margin-bottom: 5px; } HTML: <img src="images/homepage_challengecoin.jpg" class="right">

    Read the article

  • Hide .desktop files from shares in VirtualBox

    - by Oli
    I share my desktop folder with VirtualBox. It allows me to work on current files in a nice, easy way. I have quite a few utility launchers on my desktop; it's only a dozen or so at peak times, but it makes navigating the list of real files a little harder when I'm working from Windows. I was wondering if there is a way of excluding the .desktop files from the share, either on the VirtualBox side (I've no idea where it keeps its Samba configuration, or whether it actually uses Samba at all for that matter) or in Windows.

    Read the article

  • Registry settings for disabling autocomplete options in IE8

    - by redphoenix
    I have a script that disables all the autocomplete settings in IE6 and IE7, but IE8 adds new settings that the script doesn't cover. The settings are under Tools > Content > AutoComplete > Settings. The registry values I'm currently modifying are:
      HKCU\Software\Microsoft\Internet Explorer\Main\FormSuggest Passwords = "no"
      HKCU\Software\Microsoft\Internet Explorer\Main\FormSuggest PW Ask = "no"
      HKCU\Software\Microsoft\Internet Explorer\Main\Use FormSuggest = "no"
      HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete\AutoSuggest = "no"
    The options these values do not uncheck are Browsing History and Favorites. Does anyone know the registry settings associated with those IE8 options?
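    For reference, the four values already listed above can be applied from a script like the sketch below (Python's winreg, Windows-only, run as the target user). It only covers the keys quoted in the question; the missing Browsing History and Favorites values for IE8 remain the open question.

      # Minimal sketch: write the registry values quoted above under HKCU.
      import winreg

      VALUES = [
          (r"Software\Microsoft\Internet Explorer\Main", "FormSuggest Passwords", "no"),
          (r"Software\Microsoft\Internet Explorer\Main", "FormSuggest PW Ask", "no"),
          (r"Software\Microsoft\Internet Explorer\Main", "Use FormSuggest", "no"),
          (r"Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete", "AutoSuggest", "no"),
      ]

      for subkey, name, data in VALUES:
          with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, subkey, 0, winreg.KEY_SET_VALUE) as key:
              winreg.SetValueEx(key, name, 0, winreg.REG_SZ, data)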

    Read the article

  • How do I connect my Samsung 6 Series TV to the network through a proxy?

    - by JGC
    I have a Samsung 6 Series LCD TV which can connect to the internet by LAN. When I connect the TV to my Windows 7 laptop, which shares its internet connection, the TV can get online. The TV can connect to YouTube, but in my country that site is filtered, so I want to use an anti-filter (proxy) program to bypass the filtering. The problem is that the TV does not recognize the proxy port or program. How can I configure the TV, or the network, to use the proxy?

    Read the article

  • PowerShell script to find files that are consuming the most disk space

    As you know, SQL Server databases and backup files can take up a lot of disk space. When disk is running low and you need to troubleshoot disk space issues, the first thing to do is to find large files that are consuming disk space. In this article I will show you a PowerShell script that you can use to find large files on your disks.
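    The article's script itself is not reproduced here, but the general idea can be illustrated with a short sketch (shown in Python rather than the article's PowerShell): walk a folder tree and report the largest files.

      # Rough illustration only: list the 20 largest files under a root folder.
      import os

      def largest_files(root, top_n=20):
          sizes = []
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  try:
                      sizes.append((os.path.getsize(path), path))
                  except OSError:
                      pass  # skip files we cannot stat (permissions, broken links)
          return sorted(sizes, reverse=True)[:top_n]

      for size, path in largest_files("C:\\"):
          print(f"{size / (1024 ** 3):8.2f} GB  {path}")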

    Read the article

  • How to connect to two VPN connections at the same time on a Mac?

    - by Sallar Kaboli
    Hello, I have a problem with my VPN connections. My ISP requires me to connect to a private VPN server of its own in order to reach the internet. I'm connected to the main network via WiFi, but I also want to connect to another VPN of my choice (to bypass the internet restrictions in my country, of course). I am able to do that in Windows, but it's not working in Mac OS X Snow Leopard. The second VPN connection type is PPTP / CHAP. How can I do that? As you can see in the picture, both VPNs are connected: the first one is for the main internet access and works properly; the second one is also connected but it isn't affecting anything, it's just connected. http://www.freezpic.com/pics/a5231c3e80501a3c25430f43c1ef5856.png

    Read the article

  • What security changes are necessary when connecting DSL modem directly to PC instead of router?

    - by Mike B
    Windows XP. I have a user with a single PC that was connected to the internet via a standard home router. The router is now having hardware-related issues, and to save money they're considering connecting the PC directly to the DSL modem, since they don't need to share the internet connection or need wireless functionality. If they decide to do that, I'm concerned that this will introduce additional security concerns. Are the Windows Firewall and Microsoft Security Essentials sufficient for protecting a computer directly connected to a DSL modem, or is other security software needed here? Ideally, I'd like to avoid having third-party firewall software constantly bringing up alerts and asking them to approve everything. Also, just to clarify, their use cases are just internet browsing and email.

    Read the article

  • System.getProperty("user.dir") does not return my project root path, but the path where my Eclipse is located

    - by facebook-100005613813158
    As the title says, I have a class named GetException.java; inside it, I read an XML file in a static initializer block (because this document is shared): static{ ... document = db.parse(new File(System.getProperty("user.dir")+"/src/exception/ExceptionCode.xml")); ... } To test whether the file path is correct, I wrote a main function inside GetException.java, and it proves that the path is correct: the XML file can be read successfully. My project root dir is "/home/wuchang/workspace/MongodbI". But when this class is loaded from another class, such as when I call one of its static functions, it reports the error message: /home/mrs/??/eclipse/src/exception/ExceptionCode.xml (No such file or directory). /home/mrs/??/eclipse/ is actually my Eclipse installation directory. So I wonder why System.getProperty("user.dir") returned the Eclipse installation directory to me instead of my project root directory?

    Read the article

  • Files Not Uploading

    - by Howdy_McGee
    So I'm running WordPress. I make changes to theme files and upload them successfully, but none of the changes show up on the actual website. At first I thought it was the wrong theme, but specific theme files I created are in there, so I'm using the right theme. Then I thought it was a server problem and maybe the company's servers were down, so I checked a few other websites and updated information just fine, all on Linux servers. Then I jumped into WordPress to make sure it wasn't a WordPress problem, but I can update the files fine from the admin panel. I checked to make sure it wasn't a FileZilla cache or a browser cache, so I cleared them both. What could be the problem? If it were a FileZilla permissions issue, I imagine I would get an error on upload, but it uploads just fine. Suggestions would be extremely helpful; I have no clue.

    Read the article

  • Access to my files on Android

    - by user18644
    I am thinking of subscribing to Dropbox, which is slightly more costly than Ubuntu One, but I need access to my files on the go, and I prefer to use my smartphone over my netbook most of the time as I like to travel light. I do not want to stream music; I want access to my files only. Whereas there is a free app for Dropbox to access said files, there isn't one for Ubuntu One. I would be prepared to wait a while if you have this in hand; have you actually given this any thought? Please tell me whether I should ignore Ubuntu One and link up with Dropbox.

    Read the article

  • Since my upgrade to 12.10 I don't have an Internet connection, but I do have a network connection. Why? [closed]

    - by Klemens
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do? Because of a few problems I had with Ubuntu, I hoped they would solve themselves by upgrading my system to v12.10. Now they are solved, jeeehaaa, but now I have a new problem: my Network Manager tells me that I'm connected, but I don't have an Internet connection. And now I don't know what to do. Help please. By the way, I'm online with my netbook.

    Read the article

  • Downloading PDB files for a machine not connected to the internet

    - by Saar
    For debugging a process I will need to download the PDB files for some operating system DLLs (OLE32, NTDLL, etc.). That server is not connected to the internet. I know the following method to get the PDBs:
    1. Get a full dump of the process.
    2. Copy the dump to another machine where an internet connection is available.
    3. Use .reload to download the PDB files.
    4. Copy the PDB files from the local PDB store.
    Is there anything simpler, or do I have to go through the dumping process? Thanks, Saar

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet, or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup:
    - QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest).
    - Server 2003 with iSCSI Initiator version 2.08, build 3825 (I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and therefore onto the NAS).
    - I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one).
    What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I tried a transfer over iSCSI without TrueCrypt in between, onto a plain NTFS format: no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time.
    Update: I copied a set of files, the ones I was having issues with, to the server, and from there I copied them into two places within the TrueCrypt volume (mounted on the NAS): a separate directory created in the root of the volume, and the same initial directory I was using in the first instance. Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements (a day wasted as well). While I had that setup, though, I copied my entire file set to the volume files and all files copied without error, over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?

    Read the article

  • Best method of transferring files over the internet?

    - by EsotericHabit
    I have a seedbox (running Ubuntu 9.10) at my parents' house and will be leaving it there once I go to college this fall. Currently I'm using Samba to transfer files between computers, but I was wondering whether, once I am on my university's network, using FTP would be a better option than Samba over a VPN. The files will range from 100 MB to 17 GB, if that matters. Would one be more efficient than the other? Did I forget any other options?
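    One way to settle the efficiency question empirically is to push the same test file over each path and compare throughput. A rough sketch in Python follows; the mount point, host name and credentials are placeholders, not details from the question:

      # Hypothetical benchmark: copy a test file onto a mounted Samba/CIFS share
      # and upload the same file over FTP, then compare MB/s.
      import os, shutil, time
      from ftplib import FTP

      def mbps(src, seconds):
          return os.path.getsize(src) / (1024 * 1024) / seconds

      def time_copy(src, dst):                 # dst lives on a CIFS mount of the seedbox
          start = time.time()
          shutil.copyfile(src, dst)
          return mbps(src, time.time() - start)

      def time_ftp(src, host, user, password): # an FTP server running on the seedbox
          start = time.time()
          with FTP(host) as ftp:
              ftp.login(user, password)
              with open(src, "rb") as f:
                  ftp.storbinary("STOR " + os.path.basename(src), f)
          return mbps(src, time.time() - start)

      print("samba: %.1f MB/s" % time_copy("test.bin", "/mnt/seedbox/test.bin"))
      print("ftp:   %.1f MB/s" % time_ftp("test.bin", "seedbox.example.org", "user", "secret"))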

    Read the article

  • Deletion of SQL Profiler Trace files (.trc)

    - by Mark
    We've noticed a lot of .trc files in our SQL data folder (\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data) on our server. The date range for these files spans over one day and the total file size of all files together is about 21 gigs. I'd like to free up this space but I'm not sure if I can just delete the files manually through Windows Explorer or if I need to do anything in SQL, like run a command or script. Any ideas?
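    If it does turn out to be safe to remove them by hand (i.e. the trace that produced them is no longer running), the cleanup can also be scripted. A rough sketch in Python, keeping the last seven days as an arbitrary example:

      # Sketch only: delete .trc files older than a cutoff from the data folder
      # named in the question. Verify the trace is stopped before running this.
      import glob, os, time

      data_dir = r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data"
      cutoff = time.time() - 7 * 24 * 3600   # keep the most recent 7 days

      for trc in glob.glob(os.path.join(data_dir, "*.trc")):
          if os.path.getmtime(trc) < cutoff:
              os.remove(trc)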

    Read the article

  • How to merge data from many text files into a database

    - by Mirage
    I have around 100 text files. The files contain questions and 3 choices, like below:
      ab001.txt  -- contains the question
      ab001a.txt -- is the first choice
      ab001b.txt -- is the second choice
      ab001c.txt -- is the third choice
    There are thousands of files like this. Now I want to insert them into SQL, or maybe into Excel first, with the question in the first column and the three answers in the other columns. The first two characters are the same for some files; it looks like they signify some category, so around every 30 questions have the same first characters. Any ideas?
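    A sketch of one possible approach (Python), assuming the exact naming convention shown above and that all the .txt files sit in a single folder (the folder name here is made up); it writes a CSV that can be opened in Excel or bulk-loaded into SQL:

      # Collect question + three choices per base name and write them as CSV rows.
      import csv, glob, os

      folder = "questions"   # hypothetical location of the .txt files
      rows = []
      for qfile in sorted(glob.glob(os.path.join(folder, "??[0-9][0-9][0-9].txt"))):
          base = qfile[:-4]                       # strip ".txt"
          with open(qfile, encoding="utf-8") as f:
              question = f.read().strip()
          choices = []
          for suffix in ("a", "b", "c"):
              with open(base + suffix + ".txt", encoding="utf-8") as f:
                  choices.append(f.read().strip())
          category = os.path.basename(base)[:2]   # the two-letter prefix looks like a category
          rows.append([category, question] + choices)

      with open("questions.csv", "w", newline="", encoding="utf-8") as out:
          writer = csv.writer(out)
          writer.writerow(["category", "question", "choice_a", "choice_b", "choice_c"])
          writer.writerows(rows)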

    Read the article

  • Moving directories full of files over the top

    - by JavaRocky
    I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing. I would like to move just the missing files back over. I prefer moving files instead of copying, as space is at a premium on this particular box and the files are quite large. How can I achieve this?
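    A minimal sketch of one way to do it (Python), assuming the backup and live trees mirror each other; the two paths are placeholders. Files are moved rather than copied, matching the space constraint mentioned above:

      # Walk the backup tree and move over only the files absent from the live tree,
      # preserving the directory layout.
      import os, shutil

      backup_root = "/mnt/backup/data"   # placeholder: where the backup lives
      live_root = "/srv/data"            # placeholder: the directory with missing files

      for dirpath, _dirs, files in os.walk(backup_root):
          rel = os.path.relpath(dirpath, backup_root)
          target_dir = os.path.join(live_root, rel)
          os.makedirs(target_dir, exist_ok=True)
          for name in files:
              dst = os.path.join(target_dir, name)
              if not os.path.exists(dst):          # only restore files that went missing
                  shutil.move(os.path.join(dirpath, name), dst)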

    Read the article

  • Creating multiple stand-alone zip files?

    - by im_chc
    How can I automatically zip a group of files into multiple zip files (say, 2 MB in size for each file), where each zip file is a stand-alone zip? (i.e. not a multi-volume zip set, where losing any one of the parts means you can't unzip at all). Are there any tools available to do this? Actually, I just need to group the files into many groups of 2 MB each; whether they are zipped or not doesn't matter. Thanks!
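    If a small script is acceptable, here is a rough sketch (Python) of the grouping idea: greedily pack files into batches of roughly 2 MB and write each batch as its own ordinary, stand-alone zip archive. The input pattern and output names are made up for the example:

      # Pack files into ~2 MB groups; each group becomes an independent zip file.
      import glob, os, zipfile

      LIMIT = 2 * 1024 * 1024                  # ~2 MB per archive (by uncompressed size)
      files = sorted(glob.glob("input/*"))

      def flush(group, index):
          with zipfile.ZipFile(f"part_{index:03d}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
              for path in group:
                  zf.write(path, arcname=os.path.basename(path))

      group, group_size, index = [], 0, 1
      for path in files:
          size = os.path.getsize(path)
          if group and group_size + size > LIMIT:
              flush(group, index)
              group, group_size, index = [], 0, index + 1
          group.append(path)                   # a single file larger than LIMIT gets its own archive
          group_size += size
      if group:
          flush(group, index)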

    Read the article

  • How to delete duplicate restored user files with "(2)" added (Win7)

    - by user332172
    I restored my user files on a Windows 7 system from the Windows 7 backup. I selected the wrong restore option and all files were restored; existing unchanged files were restored with the text string " (2)" added. Is there a way to write a batch file or script to do this operation? Example file names: "01 lesson 1" and "01 lesson 1 (2)". I want to delete all files which had " (2)" appended on restore.
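    A cautious sketch of how that could be scripted (Python here rather than a batch file; the root path is a placeholder). It only deletes a " (2)" file when the matching original is still present, and it is worth dry-running it first with the os.remove line replaced by a print:

      # Remove "<name> (2)" / "<name> (2).<ext>" duplicates left behind by the restore.
      import os, re

      root = r"C:\Users\restored"   # placeholder: where the files were restored to
      pattern = re.compile(r"^(?P<stem>.*) \(2\)(?P<ext>\.[^.]*)?$")

      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              m = pattern.match(name)
              if not m:
                  continue
              original = m.group("stem") + (m.group("ext") or "")
              if os.path.exists(os.path.join(dirpath, original)):
                  os.remove(os.path.join(dirpath, name))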

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:
    SELECT * FROM my_hdfs_external_table;
    or use the same SQL access to load a table in Oracle:
    INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints.
    ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
    INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */
    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
    Determining the Number of Location Files
    Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).
    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.
    Rule 3: The number of location files chosen should be a small multiple of the DOP.
    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files
    Let’s start with the next rule and then explain it:
    Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size.
    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading (a toy sizing sketch at the end of this post makes these rules concrete).
    Next Steps
    So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
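    To make the sizing rules above concrete, here is a toy calculator (an illustration only; it is not part of OSCH or its External Table tool). With its defaults it reproduces the worked example from the post:

      # Given the DOP you are allowed and the total payload, suggest a location-file
      # count (Rule 3) and a rough per-HDFS-file size (Rule 4).
      def osch_sizing(dop, payload_gb, location_multiple=1, files_per_location=20):
          location_files = dop * location_multiple          # a small multiple of the DOP
          hdfs_files = location_files * files_per_location  # a multiple of the location files
          return location_files, hdfs_files, payload_gb / hdfs_files

      locs, hdfs, size_gb = osch_sizing(dop=8, payload_gb=1600)
      print(f"{locs} location files, {hdfs} HDFS files of about {size_gb:.1f} GB each")
      # -> 8 location files, 160 HDFS files of about 10.0 GB each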

    Read the article

  • Net Neutrality FAIL [closed]

    - by leeand00
    I know I'll get into all kinds of trouble for bringing this up on SO, but considering that nearly all of us programmers depend on the Internet to get our jobs done, I really think it's worth looking into today's failure of our right to use the Internet, brought about by ISP lobbying. Although something tells me there will be retribution for the actions of the ISPs/telcos/cable companies and their lobbyists, since, let's face it, ISPs/telcos didn't invent the Internet. I'm not going to be the one to do it, but I think somebody already has, as everybody I talked to was having Internet connection problems today at work. Just thought this might be relevant to all of our jobs, in the U.S.A. at least. If you work at a big evil ISP, by all means try to close this question. If you don't, and you're just a chap who enjoys your Internet access, please RT this: Contact The Democrats Who Are Against Net Neutrality (Full List W/ Contact Info) http://bit.ly/aMSV0W #NetNeutralityFAIL net-neutrality

    Read the article

  • Personal Technology – Excel Tip: Comparing Excel Files

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice.
    I have been writing about Excel tips on my blog and thought it would be great to share one interesting tip here as a guest blog. Assume a situation where you want to compare multiple Excel files. Here is a typical scenario I have encountered as a common activity. Assume you are sending an Excel file with tons of data, formulae and multiple sheets. Now you request your colleague to validate the file and, if required, change content for correctness. After receiving the file from your colleague, you want to know what changes were made by this person to your document. Here is a cool new addition to Excel 2013 that can help you achieve this task. To get to this option, click the INQUIRE tab. In case you don’t have the INQUIRE tab, check the Options using INQUIRE blog post; in that post, we discuss all the other options of the INQUIRE tab. Once you are on the INQUIRE tab, select the “Compare Files” button as shown in the figure above. This brings up a dialog as below. If you are on Windows 8 or Windows 7, you can instead search for an application called “Spreadsheet Compare 2013”. Ultimately both options lead us to the same application. If you are using the stand-alone app, once the app initializes, click the “Compare Files” option on the toolbar. Make sure to give it two different Excel files, as shown in the figure above. After selecting the Excel sheets, you can see the Compare tool has a number of other options to play with. We will talk about some of them later in this post. Just below the toolbar is a colorful side-by-side comparison of both our Excel sheets. We can also see the various tabs from each file. There is a meaning to each of the color codes, which will be discussed next. As you saw above, the color coding has a meaning. For example, the bottom pane lists each of the color codes and, most importantly, each of the changes compared side by side. The detailed information shown below can be exported using the “Export Results” option from the toolbar as a separate Excel workbook, or can be copied to the clipboard to be used later. The final piece of the puzzle is a graphical view of these difference results, broken down by category. We cannot drill down per se, but this is a great way to know that the maximum changes seem to be based on “Cell Formats” and then a few “Calculated Values” have changed. The INQUIRE option and the Spreadsheet Compare 2013 tool are part of Excel 2013. So as you explore the new version of Excel, there are many such hidden features that are worth exploring. Do let us know if you enjoyed learning a new feature today, and I hope you will play around with this feature in your day-to-day challenges when working with Excel files.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Excel, Personal Technology

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with the latest 6.3.0r05 firmware to route to the internet from a subinterface I created on bgroup0, set up as vlan2 (bgroup0.1 in the "wifi" zone). When connected on the default vlan, it gets on the internet just fine. When I switch to vlan2, I'm unable to get to the internet. I am able to get the correct IP address (10.150.0.0/24) from DHCP and to reach the Juniper management page, but nothing gets past the firewall; I can't ping 4.2.2.2 or the internet gateway. I even set up logging on the wifi-to-untrust policy, and it does show the attempts (they're just timeouts). 172.31.16.0/24 is the untrusted LAN; it's already NAT'ed but works fine for testing. I can ping this IP from the default vlan but not from vlan2. 192.168.1.0/24 is the trusted main LAN. 10.150.0.0/24 is the isolated wifi LAN on vlan2. The idea is to set up an AP with LAN and guest access (the AP supports multiple SSIDs on different vlans). I know I can set up the Juniper to use different ports for the wifi LAN and use their ProCurve switch to do the vlan separation, but I have never used vlans on a Juniper firewall and I would like to try it out this way. Here is the complete config file:
    unset key protection enable
    set clock timezone -5
    set vrouter trust-vr sharable
    set vrouter "untrust-vr"
    exit
    set vrouter "trust-vr"
    unset auto-route-export
    exit
    set alg appleichat enable
    unset alg appleichat re-assembly enable
    set alg sctp enable
    set auth-server "Local" id 0
    set auth-server "Local" server-name "Local"
    set auth default auth server "Local"
    set auth radius accounting port 1646
    set admin name "netscreen"
    set admin password "xxxxxxxxxxxxxxxx"
    set admin auth web timeout 10
    set admin auth dial-in timeout 3
    set admin auth server "Local"
    set admin format dos
    set zone "Trust" vrouter "trust-vr"
    set zone "Untrust" vrouter "trust-vr"
    set zone "DMZ" vrouter "trust-vr"
    set zone "VLAN" vrouter "trust-vr"
    set zone id 100 "Wifi"
    set zone "Untrust-Tun" vrouter "trust-vr"
    set zone "Trust" tcp-rst
    set zone "Untrust" block
    unset zone "Untrust" tcp-rst
    set zone "MGT" block
    unset zone "V1-Trust" tcp-rst
    unset zone "V1-Untrust" tcp-rst
    set zone "DMZ" tcp-rst
    unset zone "V1-DMZ" tcp-rst
    unset zone "VLAN" tcp-rst
    unset zone "Wifi" tcp-rst
    set zone "Untrust" screen tear-drop
    set zone "Untrust" screen syn-flood
    set zone "Untrust" screen ping-death
    set zone "Untrust" screen ip-filter-src
    set zone "Untrust" screen land
    set zone "V1-Untrust" screen tear-drop
    set zone "V1-Untrust" screen syn-flood
    set zone "V1-Untrust" screen ping-death
    set zone "V1-Untrust" screen ip-filter-src
    set zone "V1-Untrust" screen land
    set interface "ethernet0/0" zone "Untrust"
    set interface "ethernet0/1" zone "Untrust"
    set interface "bgroup0" zone "Trust"
    set interface "bgroup0.1" tag 2 zone "Wifi"
    set interface "bgroup1" zone "DMZ"
    set interface bgroup0 port ethernet0/2
    set interface bgroup0 port ethernet0/3
    set interface bgroup0 port ethernet0/4
    set interface bgroup0 port ethernet0/5
    set interface bgroup0 port ethernet0/6
    unset interface vlan1 ip
    set interface ethernet0/0 ip 172.31.16.243/24
    set interface ethernet0/0 route
    set interface bgroup0 ip 192.168.1.1/24
    set interface bgroup0 nat
    set interface bgroup0.1 ip 10.150.0.1/24
    set interface bgroup0.1 nat
    set interface bgroup0.1 mtu 1500
    unset interface vlan1 bypass-others-ipsec
    unset interface vlan1 bypass-non-ip
    set interface ethernet0/0 ip manageable
    set interface bgroup0 ip manageable
    set interface bgroup0.1 ip manageable
    set interface ethernet0/0 manage ping
    set interface ethernet0/1 manage ping
    set interface bgroup0.1 manage ping
    set interface bgroup0.1 manage telnet
    set interface bgroup0.1 manage web
    unset interface bgroup1 manage ping
    set interface bgroup0 dhcp server service
    set interface bgroup0.1 dhcp server service
    set interface bgroup0 dhcp server auto
    set interface bgroup0.1 dhcp server enable
    set interface bgroup0 dhcp server option gateway 192.168.1.1
    set interface bgroup0 dhcp server option netmask 255.255.255.0
    set interface bgroup0 dhcp server option dns1 8.8.8.8
    set interface bgroup0.1 dhcp server option lease 1440
    set interface bgroup0.1 dhcp server option gateway 10.150.0.1
    set interface bgroup0.1 dhcp server option netmask 255.255.255.0
    set interface bgroup0.1 dhcp server option dns1 8.8.8.8
    set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126
    set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100
    unset interface bgroup0 dhcp server config next-server-ip
    unset interface bgroup0.1 dhcp server config next-server-ip
    set interface "serial0/0" modem settings "USR" init "AT&F"
    set interface "serial0/0" modem settings "USR" active
    set interface "serial0/0" modem speed 115200
    set interface "serial0/0" modem retry 3
    set interface "serial0/0" modem interval 10
    set interface "serial0/0" modem idle-time 10
    set flow tcp-mss
    unset flow no-tcp-seq-check
    set flow tcp-syn-check
    unset flow tcp-syn-bit-check
    set flow reverse-route clear-text prefer
    set flow reverse-route tunnel always
    set pki authority default scep mode "auto"
    set pki x509 default cert-path partial
    set crypto-policy
    exit
    set ike respond-bad-spi 1
    set ike ikev2 ike-sa-soft-lifetime 60
    unset ike ikeid-enumeration
    unset ike dos-protection
    unset ipsec access-session enable
    set ipsec access-session maximum 5000
    set ipsec access-session upper-threshold 0
    set ipsec access-session lower-threshold 0
    set ipsec access-session dead-p2-sa-timeout 0
    unset ipsec access-session log-error
    unset ipsec access-session info-exch-connected
    unset ipsec access-session use-error-log
    set url protocol websense
    exit
    set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit
    set policy id 1
    exit
    set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log
    set policy id 2
    exit
    set nsmgmt bulkcli reboot-timeout 60
    set ssh version v2
    set config lock timeout 5
    unset license-key auto-update
    set telnet client enable
    set snmp port listen 161
    set snmp port trap 162
    set snmpv3 local-engine id "0162122009006149"
    set vrouter "untrust-vr"
    exit
    set vrouter "trust-vr"
    unset add-default-route
    set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1
    exit
    set vrouter "untrust-vr"
    exit
    set vrouter "trust-vr"
    exit

    Read the article

  • Accessing Squid Proxy over the internet

    - by prateekdayal
    Hi, I recently finished installing Squid on a VPS I have in the US, and it's working fine locally (I verified by setting the http_proxy variable and using lynx). I want to access this proxy over the internet (as an anonymizer) so that I can see how some ads show up for US traffic on my website. I have set up authentication, so abuse is not a problem. However, I am not able to access the proxy over the internet. I have set the following rule in squid.conf: http_access allow all. Is what I want not possible, or am I missing something? Port 3128 is open in the firewall, so that is not an issue, and Squid is listening on 0.0.0.0. Thanks, Prateek

    Read the article
