Search Results

Search found 32789 results on 1312 pages for 'object relational mapping'.

Page 639/1312 | < Previous Page | 635 636 637 638 639 640 641 642 643 644 645 646  | Next Page >

  • Vlans and subinterfaces

    - by Adeodatus
    I've inherited a moderate-size network that I'm trying to bring some sanity to. Basically, it's 8 public class Cs and a slew of private ranges, all on one VLAN (vlan1, of course). Most of the network is located throughout dark sites. I need to start separating some of the network. I've changed the ports from the main Cisco switch (3560) to the Cisco router (3825) and to the other remote switches to trunking with dot1q encapsulation. I'd like to start moving a few select subnets to different VLANs. To get some of the different services provided on our address space (and to separate customers) onto different VLANs, do I need to create a subinterface on the router for each VLAN and, if so, how do I get the switch port to work on a specific VLAN? Keep in mind, these are dark sites and getting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each VLAN, then setting the ports with services I want to move to a different VLAN to allow only that VLAN. Example of vlan3 on the 3825:
      interface GigabitEthernet0/1.3
       description Vlan-3
       encapsulation dot1Q 3
       ip address 192.168.0.81 255.255.255.240
    The connection between the switch and router:
      interface GigabitEthernet0/48
       description Core-router
       switchport trunk encapsulation dot1q
       switchport mode trunk
    show interfaces gi0/48 switchport:
      Name: Gi0/48
      Switchport: Enabled
      Administrative Mode: trunk
      Operational Mode: trunk
      Administrative Trunking Encapsulation: dot1q
      Operational Trunking Encapsulation: dot1q
      Negotiation of Trunking: On
      Access Mode VLAN: 1 (default)
      Trunking Native Mode VLAN: 1 (default)
      Administrative Native VLAN tagging: enabled
      Voice VLAN: none
      Administrative private-vlan host-association: none
      Administrative private-vlan mapping: none
      Administrative private-vlan trunk native VLAN: none
      Administrative private-vlan trunk Native VLAN tagging: enabled
      Administrative private-vlan trunk encapsulation: dot1q
      Administrative private-vlan trunk normal VLANs: none
      Administrative private-vlan trunk private VLANs: none
      Operational private-vlan: none
      Trunking VLANs Enabled: ALL
      Pruning VLANs Enabled: 2-1001
      Capture Mode Disabled
      Capture VLANs Allowed: ALL
      Protected: false
      Unknown unicast blocked: disabled
      Unknown multicast blocked: disabled
      Appliance trust: none
    So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer-2 switch, are all within the 192.168.0.82-95 range, and are using 192.168.0.81 as their gateway, what is left to do, especially to gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?
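
    As a rough sketch only (the interface numbers and addressing are taken from the question, the VLAN name is invented), the usual pattern is one router subinterface per VLAN plus an access-port assignment on the switch:
      ! 3825 router - one subinterface per VLAN (already shown above)
      interface GigabitEthernet0/1.3
       description Vlan-3
       encapsulation dot1Q 3
       ip address 192.168.0.81 255.255.255.240
      ! 3560 switch - create the VLAN and put the customer-facing port into it
      vlan 3
       name Customer-Services
      interface GigabitEthernet0/18
       description Customer-LAN
       switchport mode access
       switchport access vlan 3
    With the trunk between gi0/48 and the router already carrying all VLANs, moving gi0/18 into access VLAN 3 should be enough for hosts that use 192.168.0.81 as their gateway, assuming vlan 3 exists in the switch's VLAN database.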

    Read the article

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin... I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). Did the following:
      Created the GPO in MyOU - Users
      Removed the default Authenticated Users under Security Filtering
      Added the security group with my account to Security Filtering
      Set up the mapping via the User Configuration option
      Changed GPO Status to "Computer configuration settings disabled"
      Left WMI filtering to
      Closed the GPO at this point...
      Logged in as the target user; ran gpupdate /force
      Logged out, logged in, ran gpresult /r - no mention of my GPO
      Rebooted
      Logged in, re-ran gpupdate /force
      Logged out, logged in, ran gpresult /r - still no mention of my GPO
    If I log in with a completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working for. Is there anything else I can do short of rebooting endlessly and crossing my fingers?
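
    If the GroupPolicy PowerShell module is available, one hedged thing to check (a suggestion, not a confirmed fix; the GPO and group names below are placeholders) is whether the filtered group actually ended up with both Read and Apply rights on the GPO:
      Import-Module GroupPolicy
      # List everyone who can read/apply this GPO
      Get-GPPermission -Name "Map Folder GPO" -All
      # Grant the security group the level needed for the policy to apply
      Set-GPPermission -Name "Map Folder GPO" -TargetName "MyMappingGroup" -TargetType Group -PermissionLevel GpoApply
    If GpoApply is already present, comparing gpresult /r run in the user's own session against the RSOP of the account where it shows as security-filtered may narrow down where the difference lies.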

    Read the article

  • Postfix: How do I Make Email Aliases Work?

    - by Nick
    The documentation claims that I can add aliases in a file (like /etc/postfix/virtusertable) and then use the "virtual_maps" directive to point to it. This does not appear to be working, however. My mail is bouncing with:
      Recipient address rejected: User unknown in local recipient table;
    If I mail the user from the server using the mail command (mail myuser), it works. The message goes through Postfix and lands in the Cyrus inbox correctly. When I use fetchmail to get the user's messages off a POP3 server, Postfix fails. The user's email is "[email protected]", but it doesn't seem to be mapping correctly to "myuser", the Cyrus mailbox name.
    /etc/postfix/main.cf:
      myhostname = localhost
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = localhost
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp #lmtp:unix:/var/run/lmtp
      virtual_alias_domains = mydomain.com
      virtual_maps = hash:/etc/postfix/virtusertable
    /etc/fetchmailrc:
      set syslog;
      set daemon 20;
      poll "mail.pop3server.com" with protocol pop3 user "[email protected]" password "12345" is "myuser" fetchall keep
    /etc/postfix/virtusertable:
      [email protected] myuser
    postconf -n:
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      inet_interfaces = all
      mailbox_size_limit = 0
      mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
      mydestination = localhost
      myhostname = localhost
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
      virtual_alias_domains = mydomain.com
    Why is it ignoring my alias?
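
    One detail that stands out: virtual_maps does not appear in the postconf -n output above, which suggests the setting may not have been picked up, and the lookup table may never have been compiled into a .db file. Under those assumptions, a typical sequence would look like this (the address shown is the redacted one from the question):
      postmap /etc/postfix/virtusertable
      postconf -e "virtual_alias_maps = hash:/etc/postfix/virtusertable"
      postfix reload
      postmap -q "[email protected]" hash:/etc/postfix/virtusertable
    The postmap command builds virtusertable.db from the text file, virtual_alias_maps is the current name for the legacy virtual_maps parameter, and the final query should print the mailbox name if the lookup is working.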

    Read the article

  • FastCGI Error when installing PHP on IIS7.5

    - by ytoledano
    I'm trying to install MediaWiki on a Win2008r2 server, but can't manage to install PHP. Here's what I did:
      Grabbed a Zip archive of PHP and unzipped it into C:\PHP.
      Created two subdirs: c:\PHP\sessiondata and c:\PHP\uploadtemp.
      Granted modify rights to the IUSR account for the subdirs.
      Copied php.ini-production as php.ini.
      Edited php.ini and made the following changes:
        fastcgi.impersonate = 1
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0
        open_basedir = "c:\inetpub\wwwroot;c:\PHP\uploadtemp;C:\PHP\sessiondata"
        extension = php_mysql.dll
        extension_dir = "./ext"
        upload_tmp_dir = C:\PHP\uploadtemp
        session.save_path = C:\php\sessiondata
      Installed the Web Server role, selected the CGI and HTTP Redirection options.
      In the Handler Mappings, added a Module Mapping with the following values: Path = *.php, Module = FastCgiModule, Executable = c:\php\php-cgi.exe, Name = PHP via FastCGI.
      Created a test page in the wwwroot directory, phpinfo.php, with the contents: <?php phpinfo(); ?>
      Browsed to http://localhost/phpinfo.php
    But then I get:
      HTTP Error 500.0 - Internal Server Error
      An unknown FastCGI error occurred
      Detailed Error Information
      Module: FastCgiModule
      Notification: ExecuteRequestHandler
      Handler: PHP via FastCGI
      Error Code: 0x800736b1
      Requested URL: http://localhost:80/phpinfo.php
      Physical Path: C:\inetpub\wwwroot\phpinfo.php
      Logon Method: Anonymous
      Logon User: Anonymous
    Does anyone know what I'm doing wrong here? Thanks.
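
    Error code 0x800736b1 is the Windows "side-by-side configuration" error, which in this context usually points to a missing Visual C++ runtime for the PHP build rather than an IIS misconfiguration. That diagnosis is an assumption here, but it is cheap to test by running the binary directly from a console:
      C:\PHP> php-cgi.exe -v
      REM If the runtime is missing, this shows the "side-by-side configuration is incorrect" dialog
      REM instead of printing the PHP version; installing the Visual C++ Redistributable that matches
      REM the PHP build (x86 vs x64, VC9/VC11, thread-safe vs NTS) typically resolves it.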

    Read the article

  • How to convert an MKV to AVI with minimal loss

    - by Linux Jedi
    To convert an MKV to AVI, I do two things. The first thing I do is this: ffmpeg -i filename.mkv -vcodec copy -acodec copy output.avi This converts the MKV to an AVI, but the problem is that the video does not play smoothly for some reason. That's fine, because if I do one more thing it gets fixed: ffmpeg -i output.avi -vcodec mpeg4 -b 4000k -acodec mp2 -ab 320k converted.avi After I do this then the file plays without problem. I had success doing it this way for one file, but then I tried it on another file, and there is a slight, but noticeable loss in video quality. This is the output I get when doing the second step: FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664) configuration: libavutil 50.15. 1 / 50.15. 1 libavcodec 52.72. 2 / 52.72. 2 libavformat 52.64. 2 / 52.64. 2 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.11. 0 / 0.11. 0 Seems stream 0 codec frame rate differs from container frame rate: 359.00 (359/1) -> 29.92 (359/12) Input #0, avi, from 'output.avi': Metadata: ISFT : Lavf52.64.2 Duration: 00:04:17.21, start: 0.000000, bitrate: 3074 kb/s Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], 29.92 fps, 29.92 tbr, 29.92 tbn, 359 tbc Stream #0.1: Audio: vorbis, 48000 Hz, stereo, s16 Output #0, avi, to 'nidome_no_kanojo.avi': Metadata: ISFT : Lavf52.64.2 Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], q=2-31, 4000 kb/s, 29.92 tbn, 29.92 tbc Stream #0.1: Audio: mp2, 48000 Hz, stereo, s16, 320 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 I just used arbitrarily large settings on the second step and it worked nicely before but not in this case. What settings should I use?
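
    For what it's worth, the two-step approach transcodes video that was already remuxed, so any quality loss compounds; a single pass straight from the MKV with a quality-based setting instead of a fixed bitrate is usually gentler. The numbers below are only a starting point, not known-good values for this file:
      ffmpeg -i filename.mkv -vcodec mpeg4 -qscale 3 -acodec mp2 -ab 320k converted.avi
      # -qscale in the 2-5 range trades size for quality; lower means better quality and a larger file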

    Read the article

  • How is a subdomain passed to the webserver?

    - by Joshua Frank
    I know that DNS resolves an address like example.com to an IP address like 11.22.33.44, but I'm a little confused about how subdomains are resolved, so that when you type http://subdomain.example.com, what actually gets passed to the server at 11.22.33.44? In other words, example.com = 11.22.33.44, but subdomain.example.com/path = ??? Are "subdomain" and "path" passed as HTTP headers, or mapped in the URL in some way, or what? Thanks in advance.
    Edit: If I'm understanding correctly, BloodPhilia says that subdomain.example.com actually is a different domain that in principle could resolve to a totally different IP. But if that's so, then what about hosts that have huge numbers of (what look like) subdomains which actually map to some path on the site? For instance, Blogspot hosts millions of blogs, and they all look like this: aaa.blogspot.com, bbb.blogspot.com, ...millions more..., yyy.blogspot.com, zzz.blogspot.com. Those are clearly not subdomains with their own IPs, but rather some mapping like aaa.blogspot.com -> www.blogspot.com/aaa, but how is this accomplished? What actually gets passed to the web server at blogspot.com?
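
    A sketch of what actually crosses the wire may help (the hostnames are just the ones from the question, and the path is invented): DNS only turns the full hostname into an IP address, and that same hostname is then repeated inside the HTTP request itself, which is what lets one IP serve many "subdomain" sites through name-based virtual hosting. A request for http://aaa.blogspot.com/2012/05/post.html arrives looking roughly like this:
      GET /2012/05/post.html HTTP/1.1
      Host: aaa.blogspot.com
    The path travels on the request line and the subdomain arrives in the Host header; the web server or application routes on that header, so aaa.blogspot.com and zzz.blogspot.com can resolve to the same address (typically via a wildcard DNS record) and still serve different content.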

    Read the article

  • GA 8KNXP Rev1.0: 4GB installed, only 3.5 recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should sum up to 4 GB. The specs from the manual say: Maximum memory support: 4GB. If all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory mapping issue or a hardware issue. I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

    Read the article

  • convert decrypted .vobs to .avi with ffmpeg on ubuntu

    - by Arcath
    I have a .vob file that has been ripped from a DVD. When I watch the .vob it's very good quality video with 5.1 English audio, but when I use ffmpeg the result has rubbish video and mono French audio. That was using this command:
      ffmpeg -i /samba/ripping/vobs/12161840#2.vob -f avi /samba/ripping/avis/test.avi
    I've tried a few different variations on that, but it never comes back with anything good, just bigger files with bad video and incorrect sound. I know the video is good and the correct audio streams exist, so how do I select a 5.1 track and get good video? ffmpeg gives the .vob details as:
      Input #0, mpeg, from '/samba/ripping/vobs/12161840#2.vob':
        Duration: 00:42:05.56, start: 0.287267, bitrate: 5738 kb/s
        Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x576 [PAR 64:45 DAR 16:9], 8436 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
        Stream #0.1[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
        Stream #0.2[0x81]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
        Stream #0.3[0x82]: Audio: ac3, 48000 Hz, mono, s16, 192 kb/s
      Output #0, avi, to '/samba/ripping/avis/test.avi':
        Metadata:
          ISFT : Lavf52.64.2
        Stream #0.0: Video: mpeg4, yuv420p, 720x576 [PAR 64:45 DAR 16:9], q=2-31, 200 kb/s, 25 tbn, 25 tbc
        Stream #0.1: Audio: mp2, 48000 Hz, mono, s16, 64 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.3 -> #0.1
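
    The log shows ffmpeg defaulting to the mono track (#0.3) and to very low default bitrates (200 kb/s video, 64 kb/s audio). A sketch of an explicit selection, assuming this ffmpeg build uses the old dotted -map syntax shown in its own output and that the first AC-3 track is the English one:
      ffmpeg -i '/samba/ripping/vobs/12161840#2.vob' \
        -map 0.0 -map 0.1 \
        -vcodec mpeg4 -qscale 3 \
        -acodec copy \
        /samba/ripping/avis/test.avi
      # -map 0.0 / -map 0.1 pick the video stream and the first 5.1 AC-3 track;
      # -acodec copy keeps the original 5.1 AC-3 instead of downmixing to 64 kb/s mono mp2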

    Read the article

  • avconv gets killed if mkv has subtitles

    - by Lukas Knuth
    What I'm trying to do is take a movie (in a Matroska container), convert all audio tracks to AC3 and not touch anything else. I'm using this line:
      avconv -i infile.mkv -map 0 -vcodec copy -scodec copy -acodec ac3 -ab 256k outfile.mkv
    This works fine, except when there are subtitles embedded. Then, after some time processing with no progress, avconv just "dies" (output shortened, these seem to be the interesting parts):
      [matroska,webm @ 0xf867a0] max_analyze_duration reached
      [matroska,webm @ 0xf867a0] Estimating duration from bitrate, this may be inaccurate
      ...
      Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
      ...
      Stream #0.0(eng): Video: H264 / 0x34363248, yuv420p, 1280x536 [PAR 1:1 DAR 160:67], q=2-31, 1k tbn, 1k tbc (default)
      Stream #0.1(ger): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s (default)
      Stream #0.2(eng): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s
      Stream #0.3(ger): Subtitle: dvdsub (default) (forced)
        Metadata:
          title : forced
      Stream #0.4(ger): Subtitle: dvdsub
        Metadata:
          title : complete
      Stream mapping:
        Stream #0:0 -> #0:0 (copy)
        Stream #0:1 -> #0:1 (dca -> ac3)
        Stream #0:2 -> #0:2 (dca -> ac3)
        Stream #0:3 -> #0:3 (copy)
        Stream #0:4 -> #0:4 (copy)
      Input stream #0:2 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6
      Input stream #0:1 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6
      frame= 2606 fps=1303 q=-1.0 size= 3kB time=107.36 bitrate= 0.2kbits/s
      ...
      frame=96141 fps=813 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
      frame=96251 fps=810 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
      ...
      frame=97015 fps=397 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s
      Getötet ["Killed", in English]
    I have no idea why this happens, as there is no error output. I'd like to just copy the subtitles over, not touch them at all. If that won't work, they can be completely dropped.
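
    A "Killed" with no error message often means the kernel's OOM killer ended the process rather than avconv failing on its own; that is only an assumption here, but dmesg output right after the crash would confirm it. If dropping the subtitles is acceptable, a sketch that maps only video and audio (assuming this avconv build accepts stream-type specifiers in -map, which the 0.8 series generally does):
      avconv -i infile.mkv -map 0:v -map 0:a -vcodec copy -acodec ac3 -ab 256k outfile.mkv
      # -map 0:v / -map 0:a select every video and audio stream and leave the dvdsub tracks out entirely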

    Read the article

  • cannot connect to <server_name>\sqlexpress

    - by Jackson Sunuwar
    I have tried disabling the firewall and checked that sqlbrowser is started, but for some reason I cannot connect to my database... called server_name\sqlexpress. I have a virtual machine with a full-scale MS SQL Server 2008 R2 running on it... and I have several other VMs running sqlexpress. They run fine and I can connect to them using sqlexpress... but when I try to access from sqlserver... I get this error:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
    Digging deeper into the error, I found this:
      Error Number: -1
      Severity: 20
      State: 0
    and finally this:
      Program Location:
      at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
      at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
      at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
      at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
      at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
      at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
      at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
      at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
      at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
      at System.Data.SqlClient.SqlConnection.Open()
      at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
      at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()
    The firewall is turned off on the VM that's running mssqlserver... I turned off the firewall on one of the VMs that's running sqlexpress, but I still get the error... Can someone please help... thank you.
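
    Error 26 means the client never located the instance at all, so (as an assumption, not a certainty) the usual checklist is: TCP/IP enabled for the SQLEXPRESS instance in SQL Server Configuration Manager, the SQL Browser service reachable over UDP 1434 on the target VM, and the firewall on that target VM allowing both. A quick test from the machine where the connection fails, with the IP and port below being placeholders:
      sqlcmd -S server_name\sqlexpress -E -Q "SELECT @@SERVERNAME"
      REM If name resolution or the browser service is the suspect, try the IP and explicit port as well:
      sqlcmd -S tcp:10.0.0.5,1433 -E -Q "SELECT @@SERVERNAME"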

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format:
      https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC
    The distribution points to an S3 bucket that also used to be secured (it only allowed access through the CloudFront distribution).
    What happened: At some point, the URL signing expired and requests would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it is pointing to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, the invalidation did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403. The response header looks like:
      HTTP/1.1 403 Forbidden
      Server: CloudFront
      Date: Mon, 18 Aug 2014 15:16:08 GMT
      Content-Type: text/xml
      Content-Length: 110
      Connection: keep-alive
      X-Cache: Error from cloudfront
      Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
      X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
    Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as origin server. Requests to the same object in the new distribution were successful.
    The question: Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
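
    Purely as a sketch (the distribution and invalidation IDs are placeholders), this is what an invalidation issued from the AWS CLI looks like; it may also be worth keeping in mind that CloudFront caches error responses under its own error-caching TTL, so a 403 cached before the permission change can keep being served for a while even though the object itself is now public:
      aws cloudfront create-invalidation \
        --distribution-id E2EXAMPLE12345 \
        --paths "/images/TheImage.jpg" "/*"
      # check progress of the invalidation
      aws cloudfront get-invalidation --distribution-id E2EXAMPLE12345 --id I3EXAMPLE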

    Read the article

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg:
      ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif
    It works great, but the output GIF file has a very low quality. Any ideas how I can improve the quality of the converted GIF? Output of the command:
      $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif
      ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
        built on Jan 24 2013 14:52:53 with gcc 4.7.2
      *** THIS PROGRAM IS DEPRECATED ***
      This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv':
        Metadata:
          major_brand : mp42
          minor_version : 0
          compatible_brands: isommp42
          creation_time : 2013-02-14 04:00:07
        Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s
        Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc
          Metadata:
            creation_time : 1970-01-01 00:00:00
        Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s
          Metadata:
            creation_time : 2013-02-14 04:00:07
      [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p
      [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4
      Output #0, gif, to 'output.gif':
        Metadata:
          major_brand : mp42
          minor_version : 0
          compatible_brands: isommp42
          creation_time : 2013-02-14 04:00:07
          encoder : Lavf53.21.1
        Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc
          Metadata:
            creation_time : 1970-01-01 00:00:00
      Stream mapping:
        Stream #0.0 -> #0.0
      Press ctrl-c to stop encoding
      frame= 101 fps= 32 q=0.0 Lsize= 8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149
      video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676%
    Thanks.
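
    GIF is limited to a 256-colour palette, and older ffmpeg/libav builds simply dither to a generic palette, which is where most of the quality goes. On a newer ffmpeg (this needs a build with the palettegen/paletteuse filters, so not the deprecated 0.8.5 shown above), a two-pass approach along these lines usually looks much better:
      # pass 1: build a palette tailored to this clip
      ffmpeg -i input.flv -t 10 -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
      # pass 2: encode the GIF using that palette
      ffmpeg -i input.flv -i palette.png -t 10 \
        -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif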

    Read the article

  • One-To-Many Powershell Scripts

    - by Matt
    I'm trying to create a script to run as a scheduled task which will run against multiple servers and retrieve some information. To start with, I populate the list of servers by querying AD for all servers that match a certain set of criteria, using Get-ADComputer. The problem is, the list is returned as an object, which I can't then pass to the New-PSSession list. I have tried converting it to a comma-separated string by doing the following:
      foreach ($server in $serverlist) {$newlist += $server.Name + ","}
    but this still doesn't work. The alternative is to iterate through the list and run the various commands against each server one at a time, but my preference would be to avoid this and run them using one-to-many remoting.
    UPDATE: To clarify, what I want to end up being able to do is use -ComputerName $serverlist, so I want $serverlist to be a string rather than an object.
    UPDATE 2: Thanks for all the suggestions. Between them and my original method I'm starting to wonder whether -ComputerName can accept a string variable? I've had varying degrees of success getting the list of computers converted to a comma-separated string, but no matter how I do it I always get "invalid network address".
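
    One point worth noting, hedged since the exact environment is unknown: -ComputerName on New-PSSession and Invoke-Command takes an array of computer names, not one comma-separated string, so expanding the Name property is usually all that's needed. A sketch (the AD filter is illustrative, not the real query):
      # On PowerShell 3.0+ the property expansion works directly; on 2.0 use Select-Object -ExpandProperty Name
      $serverlist = (Get-ADComputer -Filter 'OperatingSystem -like "*Server*"').Name   # string[]
      # One-to-many remoting against the whole array at once
      Invoke-Command -ComputerName $serverlist -ScriptBlock { Get-Service -Name Spooler }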

    Read the article

  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it in my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using is:
      ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi
    And the ffmpeg output is:
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4'
        Metadata:
          major_brand : isom
          minor_version : 1
          compatible_brands: isomavc1
        Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s
        Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc
        Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
      Output #0, avi, to 'output.avi':
        Metadata:
          ISFT : Lavf52.64.2
        Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc
        Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
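
    -sameq does not mean "same quality"; it copies quantizer values between very different codecs and tends to produce bloated files. A hedged alternative is to drive the Xvid encoder with a fixed quantizer, or with an explicit bitrate pitched just below the source's 1387 kb/s video stream (the numbers are starting points to tune, not guaranteed values):
      # Quantizer-based: 3-5 is usually visually close to the source at a fraction of the size
      ffmpeg -i input.mp4 -vcodec libxvid -qscale 4 -acodec libmp3lame -ab 128k -ac 2 output.avi
      # Or bitrate-based
      ffmpeg -i input.mp4 -vcodec libxvid -b 1100k -acodec libmp3lame -ab 128k -ac 2 output.avi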

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting "7 Years of YouTube Scalability Lessons in 30 min", based on a presentation from Mike Solomon, one of the original engineers at YouTube:
    .... The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they've stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn't mean YouTube doesn't do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world's largest websites? Read on and see...
    Stats:
      4 billion views a day
      60 hours of video is uploaded every minute
      350+ million devices are YouTube enabled
      Revenue doubled in 2010
      The number of videos has gone up 9 orders of magnitude and the number of developers has only gone up two orders of magnitude.
      1 million lines of Python code
    Stack:
      Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code.
      Apache - when you think you need to get rid of it, you don't. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache.
      Linux - the benefit of Linux is there's always a way to get in and see how your system is behaving. No matter how bad your app is behaving, you can take a look at it with Linux tools like strace and tcpdump.
      MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it's used as a relational database or a blob store. It's about tuning and making choices about how you organize your data.
      Vitess - a new project released by YouTube, written in Go, it's a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It's RPC based.
      Zookeeper - a distributed lock server. It's used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual.
      Wiseguy - a CGI servlet container.
      Spitfire - a templating system. It has an abstract syntax tree that lets them do transformations to make things go faster.
      Serialization formats - no matter which one you use, they are all expensive. Measure. Don't use pickle. Not a good choice. Found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download.
    ...Continues. Read the blog. Watch the video.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
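
    As a hedged illustration of the pattern described above (the table and column names are invented for the example, not taken from the post), capturing a source-only column alongside $action in the OUTPUT clause looks something like this:
      -- Audit table for the example
      CREATE TABLE dbo.CustomerAudit
          (Action nvarchar(10), CustomerID int, Name nvarchar(100), Src nvarchar(20), AuditDate datetime2 DEFAULT SYSDATETIME());

      MERGE dbo.Customer AS t
      USING (SELECT CustomerID, Name, Src FROM dbo.CustomerStaging) AS s
          ON t.CustomerID = s.CustomerID
      WHEN MATCHED THEN
          UPDATE SET Name = s.Name
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (CustomerID, Name) VALUES (s.CustomerID, s.Name)
      OUTPUT $action, s.CustomerID, s.Name, s.Src          -- source columns are available here
      INTO dbo.CustomerAudit (Action, CustomerID, Name, Src);
    The source column (Src here) goes straight into the OUTPUT list even though it is never written to the target table, which is exactly the extra hook the post is highlighting.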

    Read the article

  • Are there any Graphical PowerShell tools?

    - by Dai
    As a developer for the .NET platform, I like to "explore" a platform, framework or API by browsing through the API documentation which explains what everything is - everything is covered and when I use tools like Reflector or Object Browser then I get to know for certain what I'm working with. When I'm writing my own software I can use tools like the Object Test Bench to explore and work with my classes directly. I'm looking for something similar, but for PowerShell - and ones that avoid text-mode. PowerShell is nice, and there are a lot of cool "discoverability"-things it has, such as the "Verb-Noun" syntax, however when I'm working with Exchange Server, for example, I wanted to get a list of AD Permissions on a Receive Connector and I got this list: [PS] C:\Windows\system32>Get-ADPermission "Client SVR6" -User "NT AUTHORITY\Authenticated Users" | fl User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : True AccessRights : {ReadProperty} IsInherited : True Properties : {ms-Exch-Availability-User-Password} ChildObjectTypes : InheritedObjectType : ms-Exch-Availability-Address-Space InheritanceType : Descendents [PS] C:\Windows\system32> Note how the first few entries contain identical text - there's no way to tell them apart easily. But if there was a GUI presumably it would let me drill-down into the differences better. Are there any tools that do this?
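
    Not a full object browser, but one built-in option worth trying (this assumes PowerShell 2.0 or later with the ISE feature installed, which provides the cmdlet) is Out-GridView, which turns any pipeline output into a sortable, filterable grid so near-identical ACE entries can be compared column by column:
      Get-ADPermission "Client SVR6" -User "NT AUTHORITY\Authenticated Users" |
          Select-Object User, Identity, Deny, AccessRights, IsInherited, Properties, InheritedObjectType, InheritanceType |
          Out-GridView -Title "Client SVR6 permissions"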

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug an external hard drive in and have it act as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up. I can't fool it by mapping it to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues: They aren't free. The free version cannot back up a NAS. They cannot do incremental backups. They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.)
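
    If a built-in, script-level answer is acceptable despite the last point, Robocopy (included with Windows 7) can mirror a UNC path incrementally and can be driven from Task Scheduler; a sketch, with the share and target paths as placeholders:
      robocopy \\router-nas\share E:\NAS-Backup /MIR /FFT /R:2 /W:5 /LOG+:E:\NAS-Backup\backup.log
      REM /MIR mirrors the tree (copies only changes, removes files deleted from the source),
      REM /FFT tolerates the 2-second FAT/Samba timestamp granularity common on router NAS firmware,
      REM /LOG+ appends a run log for review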

    Read the article

  • Why is a SUBST'd drive inaccessible via shortcut or Run menu, but works fine from My Computer?

    - by Kev
    I have shortcuts to C:, D:, and E: in my quick launch bar. C and E work fine when I click on them, but D does nothing (that I can see) when I click on it. D and E are both SUBST'd drives pointing to folders that happen to be network shares. (I do this rather than mapping them so it doesn't have to go through the network layer--that way it works faster and I still get recycle bin functionality, etc.) If I go Start-Run and type D: or D:\, I get an error box saying This file does not have a program associated with it for performing this action. Create an association in the Folder Options control panel. If I go to My Computer and double-click the D drive, it comes up fine. Also, if I type \\servername\sharename pointing to the same place, it comes up fine. This just started happening this morning, out of the blue. It has been working fine ever since I set it up. Why might this be?

    Read the article

  • Managing arbitrary user permissions under PureFTPd

    - by Sebastián Grignoli
    I need to provide an FTP service that needs to be web-managed in the simplest way possible. My customer wants to create folders and users, and give them read-only or read/write access arbitrarily. For example: The folder 'Documents' should be read-only for several users, writable for internal users, and invisible to the rest. The folder 'Pictures' should be read-only for journalists, writable for associates, and invisible to the rest. The folder 'Media' should be read-only, writable or invisible for arbitrary users specified in the admin. There could be a large number of users and folders. I can't find a good way to accomplish that. I thought that I could give each user a home folder and put symlinks to the folders he has read access to, and make the user part of the folder's group when he has write access too, but now I think that this wouldn't work, because with PureFTPd (or ProFTPd) I can only specify the virtual user's mapping to a system user, and only one GID for each virtual user. My approach requires that I be able to specify several GIDs for each user (one for each folder he has write access to). I need to start programming this admin and I still don't know which approach would work, if any. Any ideas?
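
    One way around the single-GID limit, offered as a sketch only (it assumes the FTP filesystem supports POSIX ACLs and that the admin tool can shell out to system commands; paths and group names are illustrative), is to keep one group per role and grant per-folder access with ACLs instead of the primary group:
      # 'Pictures': read-only for journalists, read/write for associates, invisible to everyone else
      setfacl -m g:journalists:rx   /srv/ftp/Pictures
      setfacl -m g:associates:rwx   /srv/ftp/Pictures
      setfacl -m d:g:associates:rwx /srv/ftp/Pictures   # default ACL so new files inherit the rights
      chmod o-rwx /srv/ftp/Pictures                     # no access for unlisted users
    The web admin then only has to manage group membership and issue setfacl calls when folders are created or permissions change.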

    Read the article

  • AVConv increases song duration when converting MP3

    - by chauffch
    I am struggling with the following issue. I want to convert an MP3 ADTS into a pure MP3. I am using AVConv on Ubuntu 12.10. The outcome is a file that has the same size, but the duration is now longer.
      $ ls -l
      total 6436
      -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga
      $ file Blindsided_Bon_Iver.mpga
      Blindsided_Bon_Iver.mpga: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo
      $ avconv -i Blindsided_Bon_Iver.mpga -c copy Blindsided_Bon_Iver.mp3
      avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
        built on Nov 6 2012 16:50:25 with gcc 4.6.3
      [mp3 @ 0x8c6e240] max_analyze_duration reached
      Input #0, mp3, from 'Blindsided_Bon_Iver.mpga':
        Duration: 00:05:29.29, start: 0.000000, bitrate: 160 kb/s
        Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 160 kb/s
      Output #0, mp3, to 'Blindsided_Bon_Iver.mp3':
        Metadata:
          TSSE : Lavf53.21.0
        Stream #0.0: Audio: libmp3lame, 44100 Hz, stereo, 160 kb/s
      Stream mapping:
        Stream #0:0 -> #0:0 (copy)
      Press ctrl-c to stop encoding
      size= 6432kB time=329.30 bitrate= 160.0kbits/s
      video:0kB audio:6432kB global headers:0kB muxing overhead 0.002080%
      $ ls -l
      total 12868
      -rw-rw-r-- 1 teuf teuf 6586129 nov. 27 22:26 Blindsided_Bon_Iver.mp3
      -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga
      $ file Blindsided_Bon_Iver.mp3
      Blindsided_Bon_Iver.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 32 kbps, 44.1 kHz, Stereo
    Amarok shows the new file has a duration of 25:27 and has a lot of silence. Am I using an incorrect option? Is it a bug in AVConv? Any ideas how to fix it?
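
    Two hedged workarounds, neither confirmed as the fix: the file output above shows the copy gained an ID3v2.4 tag and is now misread as 32 kbps, which is roughly consistent with the bogus 25-minute duration, so writing an ID3v2.3 tag (if this avconv build exposes the mp3 muxer's id3v2_version option) or re-encoding as a last resort may sidestep the problem:
      # keep the stream copy but ask for an ID3v2.3 tag, which more tools parse correctly
      avconv -i Blindsided_Bon_Iver.mpga -c copy -id3v2_version 3 Blindsided_Bon_Iver.mp3
      # or, if the remux keeps misbehaving, re-encode (lossy, last resort)
      avconv -i Blindsided_Bon_Iver.mpga -acodec libmp3lame -ab 160k Blindsided_Bon_Iver.mp3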

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it a larger part of my life, but even then never in the same sort of passionate way as a number of our SQL friends.  For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as does jogging around a dewy park on Saturday morning. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will be almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you’ve never ridden a bike, you could be a physics professor /Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child’s play. For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to backup, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine.  You have the skill. Making it naturally then requires repetition and experience is the primary requirement, not just simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on.  Data warehousing is still a task that is not yet deeply ingrained in my brain muscle memory. On the other hand, relational database design is something that no matter how much or how little I may get to do it, I am comfortable doing it. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some reading blogs and attending sessions at PASS events.  My best training comes from spending time working on other people’s design issues in forums (though not nearly as much as I would like to lately). Working through other people’s problems is a great way to exercise your brain on problems with which you’re not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do.  Akin to running in place, the idea is to work through designs in your head. 
I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help.) Never are the designs truly fleshed out, but enough to work through structures and processes.  On “paper”, I have designed database systems to catalog things as trivial as my Lego creations, rental car companies and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red-Gate’s Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don’t run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.

    Read the article

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than the one my OS is on (C:). Reading the following post I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't execute anymore (other than that I didn't notice any problems). My apps do work while using an account that wasn't moved. In the Event Viewer I've found error messages like these:
      App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows: Display Name:<SkyDrive>, AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive> Package Identity:<microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe> PID:<4452>. The details of the JavaScript exception are as follows Exception Name:<WinRT error>, Description:<Loading the state store failed. >, HTML Document Path:</modernskydrive/product/skydrive/App.html>, Source File Name:<ms-appx://microsoft.microsoftskydrive/jx/jx.js>, Source Line Number:<1>, Source Column Number:<27246>, and Stack Trace:
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
        ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)
    Using ProcMon, I see a lot of access denied messages, like these:
      Date & Time: 12-9-2012 9:32:20
      Event Class: File System
      Operation: CreateFile
      Result: ACCESS DENIED
      Path: D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
      TID: 2520
      Duration: 0.0000149
      Desired Access: Read Data/List Directory, Write Data/Add File, Read Control
      Disposition: OpenIf
      Options: Sequential Access, Synchronous IO Non-Alert, No Compression
      Attributes: N
      ShareMode: None
      AllocationSize: 0
    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder had. The old profile folder had john as owner, whereas my new profile folder had the Administrators group as owner. Changing this didn't help unfortunately.
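
    Modern (Store) apps run in an AppContainer and need the "ALL APPLICATION PACKAGES" group to have access to the profile's AppData\Local\Packages tree; when a profile is copied to a new drive, that ACE is often what gets lost. As a hedged sketch (the path and account name are taken from the question; verify them and test on one package folder first):
      icacls "D:\Users\John\AppData\Local\Packages" /grant "ALL APPLICATION PACKAGES":(OI)(CI)(RX) /T
      icacls "D:\Users\John" /setowner John /T /C
      REM (OI)(CI) makes the grant inherit to files and subfolders; /T recurses, /C continues past errors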

    Read the article

  • Windows ACL inheritance issues for FTP server and automated tools

    - by Martin Sall
    I have set up Cerberus FTP Server. By default, the Cerberus FTP service runs under the SYSTEM account. I also have some console applications which run as scheduled tasks. They run under a dedicated "Utilities" user account which has "Log on as batch job" permissions. These console applications take uploaded FTP files, process them and then move them to a dedicated archive folder. The problem is that my console apps are throwing security exceptions when trying to access the uploaded files. I tried to give Full Control permissions on the ftproot folder to my "Utilities" account and I checked the "Replace all child object permissions with inheritable permissions from this object" checkbox, but it affects only current files. When new files are uploaded, they again are not accessible by my "Utilities" account. I tried to go another way and put the Cerberus FTP service under the "Utilities" account. Then I also needed to give the "Utilities" account permissions on the Cerberus data folder in ProgramData. Still no luck - after this operation, the Cerberus internal SOAP web service stopped working (although everything else seems to work). I need that SOAP service to be available, so running Cerberus FTP under the "Utilities" account seems not to be an option, unless I find out what else I need to set up for that "Utilities" account to stop Cerberus from complaining. I guess Cerberus is uploading files to some temporary folder, so those files get the permissions from that folder and keep the same permissions even after being moved to the ftproot. What would be the right solution to grant the Cerberus FTP server and the "Utilities" account the minimal permissions needed to access the contents of the ftproot folder?
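
    The guess at the end is consistent with how NTFS behaves: a move within the same volume keeps the file's original ACL instead of inheriting from ftproot, while a copy always picks up the target folder's permissions. Two hedged options (the ftproot path below is a placeholder) are to have the processing job copy-then-delete instead of move, or to re-stamp inheritance on the folder after uploads land:
      REM Give Utilities modify rights that new children will inherit, then force existing files back to inherited ACEs
      icacls C:\ftproot /grant Utilities:(OI)(CI)M
      icacls C:\ftproot /reset /T
      REM /reset replaces explicit ACEs with inherited ones; it can be scheduled to run after upload windows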

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/
    However, the last command at step 7 gave me a strange error: service nginx reload. A copy-paste from my terminal:
      root@server:~# service nginx reload
      Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
      nginx: configuration file /etc/nginx/nginx.conf test failed
    When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange:
      <!DOCTYPE html>
      <html class=" ">
      <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
      <meta charset='utf-8'>
      <meta http-equiv="X-UA-Compatible" content="IE=edge">
    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:
      user www-data;
      worker_processes 4;
      pid /var/run/nginx.pid;
      events {
          worker_connections 768;
          # multi_accept on;
      }
    Any help is appreciated, thanks a lot in advance!
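
    For what it's worth, the pasted contents suggest the problem: /etc/nginx/sites-enabled/wordpress contains an HTML page (probably the tutorial page saved by mistake instead of the site configuration), so nginx chokes while parsing it. A minimal replacement server block, offered as a sketch with the domain, web root and PHP-FPM socket path as assumptions to adjust:
      server {
          listen 80;
          server_name example.com;
          root /var/www/wordpress;
          index index.php index.html;

          location / {
              try_files $uri $uri/ /index.php?$args;
          }

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed PHP-FPM socket path
          }
      }
    After replacing the file, nginx -t should pass and service nginx reload should then succeed.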

    Read the article
