Search Results

Search found 2187 results on 88 pages for 'combined fragment'.

Page 15/88 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Atheros AR928X wireless connection makes neighbouring machine drop offline

    - by funicorn
    I have an Acer laptop with an Atheros AR928X wireless card installed, supported by the ath9k driver in the Linux kernel. There are 5 other computers sharing the wireless connection via a TP-Link 150 Mbit/s wireless router. At first I found the network a little slower than it is in Windows 7, which I accepted as normal. However, a very strange thing happens: each time I connect to the router and download stuff for a while, one of the computers running Windows 7 on my local network drops off the router. And if I run my laptop under Windows 7, everything is fine. What's even stranger is that although the network becomes slower, only that particular computer drops off and its connection to the router freezes completely. I'm not willing to conclude it's due to an unhealthy connection from my laptop to the router, but we have confirmed this more than once, and there is no problem with the network when I'm running Windows 7. I'm extremely confused about what's going on. As a Linux user who has run Ubuntu for over 5 years, I am aware that wireless drivers in Linux are notorious for instability and slow speed. But is it so bad that an unhealthy wireless connection can do damage to another computer on the same local network? I do see a lot of "Tx excessive retries" in the iwconfig output. But how exactly does this happen? Thanks for your help. I guess I have to use this answer box to show the outputs: $ sudo iwconfig wlan0 IEEE 802.11bgn ESSID:"TP-LINK111" Mode:Managed Frequency:2.427 GHz Access Point: E0:05:C5:E8:A9:92 Bit Rate=121.5 Mb/s Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=47/70 Signal level=-63 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:2 Invalid misc:23 Missed beacon:0 To show what's wrong with the wireless connection, I ran iwconfig again within 3 minutes, during which time I hardly did anything and the network was nearly idle: $ sudo iwconfig wlan0 IEEE 802.11bgn ESSID:"TP-LINK111" Mode:Managed Frequency:2.427 GHz Access Point: E0:05:C5:E8:A9:92 Bit Rate=121.5 Mb/s Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=48/70 Signal level=-62 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:9 Invalid misc:28 Missed beacon:0 You can see Tx excessive retries and Invalid misc increase very quickly. $ sudo iwlist wlan0 modu wlan0 unknown modulation information. $ sudo iwlist wlan0 channel wlan0 13 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Channel 12 : 2.467 GHz Channel 13 : 2.472 GHz Current Frequency:2.427 GHz (Channel 4)

    Read the article

  • How to get wireless (Alfa) to operate at full power and speed up wireless Internet?

    - by MahboobeAlam
    I am using wireless to connect to the Internet (router). My laptop has an Atheros wireless (AR 242x/542x) adapter, but the router is a little far from my room, so I have to use an external wireless adapter, i.e. an Alfa (Realtek 8187), for connectivity. However, since I started using Ubuntu I have noticed that the Alfa is not working at full power; Internet speed in Ubuntu is much slower than in Windows on my laptop. When I am using Windows 7 the LED (bulb) on the Alfa blinks as it should, but in Ubuntu it doesn't blink; rather, it is on but very dim. The connection using the Atheros adapter is also slow... I have tried 4 methods (found on the Internet) to troubleshoot this matter, but with no success. It seems to me that Ubuntu/Linux is not letting these wireless adapters operate at full strength. iwconfig shows that power management is off for both. So what's the problem? Details: ifconfig eth0 Link encap:Ethernet HWaddr 00:1f:16:1e:36:ec UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:45 Base address:0x6000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1657 errors:0 dropped:0 overruns:0 frame:0 TX packets:1657 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:241697 (241.6 KB) TX bytes:241697 (241.6 KB) wlan0 Link encap:Ethernet HWaddr 00:22:5f:9b:24:b5 inet6 addr: fe80::222:5fff:fe9b:24b5/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:715460 errors:0 dropped:0 overruns:0 frame:0 TX packets:694246 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:493539292 (493.5 MB) TX bytes:235159393 (235.1 MB) wlan1 Link encap:Ethernet HWaddr 00:c0:ca:42:14:62 inet addr:192.168.100.102 Bcast:192.168.100.255 Mask:255.255.255.0 inet6 addr: fe80::2c0:caff:fe42:1462/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:171053 errors:0 dropped:0 overruns:0 frame:0 TX packets:181363 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:74094659 (74.0 MB) TX bytes:59474204 (59.4 MB) iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off wlan1 IEEE 802.11bg ESSID:"Zia" Mode:Managed Frequency:2.412 GHz Access Point: 00:0D:F0:9C:A6:18 Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=70/70 Signal level=-31 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:204 Invalid misc:6610 Missed beacon:0

    Read the article

  • Stairway to XML: Level 5 - The XML exist() and nodes() Methods

    The XML exist() method is used, often in a WHERE clause, to check the existence of an element within an XML document or fragment. The nodes() method lets you shred an XML instance and return the information as relational data.
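    For illustration, a minimal T-SQL sketch of the two methods working together (the Orders table, OrderXml column and element names here are invented): exist() filters rows in the WHERE clause, then nodes() with CROSS APPLY shreds the matching XML into rows.

        SELECT o.OrderID,
               i.item.value('@sku', 'varchar(20)') AS Sku,
               i.item.value('@qty', 'int')         AS Qty
        FROM dbo.Orders AS o
        CROSS APPLY o.OrderXml.nodes('/order/item') AS i(item)  -- shred each <item> element into a row
        WHERE o.OrderXml.exist('/order/item[@qty > 10]') = 1;   -- keep only orders containing a large line item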

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores the reports in xml format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values - like jobid and setid as keys for one type of report, and userid and groupId for another type of report, etc. The keys that can be referred to from a document are determined by the namespaces used in the xml doc. These keys are stored on the basis of the namespace used in the xml document. For example, if a tag in an xml fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. It will fetch those keys from that table for this namespace, look for their values in the xml doc, and store them in another table along with a pointer to the xml document (the id of the record storing the complete xml document in a cell). Use cases: When the user queries for a key and value, I return the document or set of documents that have those key/value pairs. When the user queries for a certain key and provides the name of an xslt (pre-stored), I fetch the set of documents fulfilling that criterion and convert the xml to html with the specified xslt. When the user asks for a particular fragment of a doc, it can also fetch a subset from a particular document. When the user queries for the top x values of a certain key, I return the set of documents that have the top x values of that key. I am using the DB2 database for its xml support along with relational capabilities. That makes it easier for me to run xpath expressions and fetch values of keys, and also to aggregate a set of documents fulfilling a criterion, all on the database side. Problems: DB2 stores XML docs of up to 2 GB in size. Retrieval is very slow. If something involves many documents, then it takes significant time for things to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document-oriented? Can I do xml-related xpath queries and document transformations on the db side? Or is it OK to use both in such a case?
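    As a rough sketch of the lookup side of this design on DB2 pureXML (the REPORTS table, DOC column and key value are hypothetical), a key/value query against the stored documents could look something like this:

        -- return documents whose jobid element has the requested value
        SELECT r.ID, r.DOC
        FROM REPORTS r
        WHERE XMLEXISTS('$d/report[jobid = "J1001"]' PASSING r.DOC AS "d");

    MongoDB has no built-in XPath or XSLT support, so those parts of the workload would have to move into the application layer if it replaced DB2 here.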

    Read the article

  • Why is my card blacklisted by Unity when all the requirements are fulfilled?

    - by Oxwivi
    The following is the Unity test output: OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce FX 5500/AGP/SSE2 OpenGL version string: 2.1.2 NVIDIA 173.14.30 Not software rendered: yes Not blacklisted: no GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity supported: no As you can see, all requirements are fulfilled but my GPU is blacklisted. What can I do about it?

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information passes through to the shader: [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; [FieldOffset(12)] public int type; } This works perfectly when used against this HLSL fragment: cbuffer PerFrame : register(b0) { float3 eyePos; int type; } float3 GetColour() { float3 returnColour = float3(0.0f, 0.0f, 0.0f); switch(type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } However, when I use the following structure definitions... // Note this is 16 because HLSL packs in 4-float 'chunks'. // It is also simplified, but still demonstrates the problem. [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct InternalTestStruct { [FieldOffset(0)] public int type; } [StructLayout(LayoutKind.Explicit, Size = 32)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; // Missing 4 bytes here for correct packing. [FieldOffset(16)] public InternalTestStruct mInternal; } ... the following HLSL fragment no longer works. struct InternalType { int type; }; cbuffer PerFrame : register(b0) { float3 eyePos; InternalType internalStruct; } float3 GetColour() { float3 returnColour = float3(0.0f, 0.0f, 0.0f); switch(internalStruct.type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.

    Read the article

  • iwconfig, iw not displaying wireless information

    - by Srivatsa Kanchi
    After a fresh install of 12.10, the wireless information is not shown by either iwconfig or iw. The wireless sets up successfully and I am able to connect. This is what I get: $ iwconfig wlan0 wlan0 IEEE 802.11abgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off $ iw wlan0 link Not connected. $ uname -a Linux srivatsa-ThinkPad-T61 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Read the article

  • Does Fetch as Googlebot still support their ajax-crawling proposal?

    - by Gunchars
    I spent half a day implementing the server side html generation for modal pages based on their proposal (link), but it seems like the Fetch as Googlebot functionality in Webmaster tools completely ignores the URL fragment. I've verified that the _escaped_fragment_ functionality is working on my server (example), but when I submit a URL like /#!/recipes, the Googlebot just fetches /. There aren't any recent confirmations that it's working and, honestly, it wouldn't surprise me if they just silently dropped the functionality without even editing the docs.

    Read the article

  • XML DATATYPE (series 1)

    New to SQL Server 2005 is the XML data type, which lets you store XML documents and fragments in a SQL Server database. An XML fragment is an XML instance that is missing a single top-level element. You can create columns and variables of the XML type and store XML instances in them. Note that the stored representation of XML data type instances cannot exceed 2 GB.
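    A small T-SQL illustration of the points above (the object names are invented): a column and a variable of the xml type, with a fragment - note it has no single top-level element - stored in each.

        CREATE TABLE dbo.Docs
        (
            DocID int PRIMARY KEY,
            Body  xml   -- holds documents or fragments, up to 2 GB per instance
        );

        DECLARE @frag xml;
        SET @frag = N'<item id="1"/><item id="2"/>';  -- a fragment: two top-level elements
        INSERT INTO dbo.Docs (DocID, Body) VALUES (1, @frag);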

    Read the article

  • MySQL server crashes (InnoDB)

    - by martin
    Today we got a DB crash. The DB is InnoDB. At first, in the log: 120404 10:57:40 InnoDB: ERROR: the age of the last checkpoint is 9433732, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:58:48 InnoDB: ERROR: the age of the last checkpoint is 9825579, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:59:04 InnoDB: ERROR: the age of the last checkpoint is 13992586, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:59:20 InnoDB: ERROR: the age of the last checkpoint is 18059881, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. After a manual service stop and a normal PC restart: 120404 11:12:35 InnoDB: Error: page 3473451 log sequence number 105 802365904 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1103869440 120404 11:12:37 InnoDB: Error: page 0 log sequence number 105 834817616 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: Last MySQL binlog file position 0 3710603, file name .\mysql-bin.000336 InnoDB: Starting in background the rollback of uncommitted transactions 120404 11:12:38 InnoDB: Rolling back trx with id 0 1103866646, 1 rows to undo 120404 11:12:38 InnoDB: Started; log sequence number 105 796344770 120404 11:12:38 InnoDB: Error: page 2097163 log sequence number 105 803249754 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: Rolling back of trx id 0 1103866646 completed 120404 11:12:39 InnoDB: Rollback of non-prepared transactions completed 120404 11:12:39 [Note] Event Scheduler: Loaded 0 events 120404 11:12:39 [Note] wampmysqld: ready for connections. Version: '5.1.53-community' socket: '' port: 3306 MySQL Community Server (GPL) 120404 11:12:40 InnoDB: Error: page 2097162 log sequence number 105 803215859 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information.
120404 11:12:40 InnoDB: Error: page 2097156 log sequence number 105 803181181 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. 120404 11:12:40 InnoDB: Error: page 2097157 log sequence number 105 803193066 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. when tried to recover data get : key_buffer_size=16777216 read_buffer_size=262144 max_used_connections=0 max_threads=151 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 0000000140262AFC mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AAFA1 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AB33A mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140268219 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014027DB13 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A909F mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A91B6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014025B9B0 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014022F9C6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140219979 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014009ABCF mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z() 000000014003308C mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z() 00000001400375A9 mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z() 000000014001DACE mysqld.exe!handle_shutdown() 000000014001E285 mysqld.exe!?win_main@@YAHHPEAPEAD@Z() 000000014001E632 mysqld.exe!?mysql_service@@YAHPEAX@Z() 00000001402EA477 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402EA545 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000007712652D kernel32.dll!BaseThreadInitThunk() 000000007725C521 ntdll.dll!RtlUserThreadStart() The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 120404 14:17:49 [Note] Plugin 'FEDERATED' is disabled. 120404 14:17:49 [Warning] option 'innodb-force-recovery': signed value 8 adjusted to 6 InnoDB: The user has set SRV_FORCE_NO_LOG_REDO on InnoDB: Skipping log redo InnoDB: Error: trying to access page number 4290979199 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 120404 14:17:52 InnoDB: Assertion failure in thread 3928 in file .\fil\fil0fil.c lin23 InnoDB: We intentionally generate a memory trap. 
InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 120404 14:17:52 - mysqld got exception 0xc0000005 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=262144 max_used_connections=0 max_threads=151 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 0000000140262AFC mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AAFA1 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AB33A mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140268219 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014027DB13 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A909F mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A91B6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014025B9B0 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014022F9C6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140219979 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014009ABCF mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z() 000000014003308C mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z() 00000001400375A9 mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z() 000000014001DACE mysqld.exe!handle_shutdown() 000000014001E285 mysqld.exe!?win_main@@YAHHPEAPEAD@Z() 000000014001E632 mysqld.exe!?mysql_service@@YAHPEAX@Z() 00000001402EA477 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402EA545 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000007712652D kernel32.dll!BaseThreadInitThunk() 000000007725C521 ntdll.dll!RtlUserThreadStart() The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. Any suggestion how to get DB working ????
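    The first block of errors ("the age of the last checkpoint ... exceeds the log group capacity") usually means the InnoDB redo logs are too small for the workload. As a hedged starting point on MySQL 5.1, the statements below only inspect the current log configuration and engine state; the size itself is changed via innodb_log_file_size in my.cnf and needs a clean shutdown (with the old ib_logfile* files moved aside) before restarting, so treat this as a sketch rather than a recovery procedure.

        -- current redo log group size and count
        SHOW VARIABLES LIKE 'innodb_log_file_size';
        SHOW VARIABLES LIKE 'innodb_log_files_in_group';

        -- overall InnoDB state, including log sequence numbers and pending transactions
        SHOW ENGINE INNODB STATUS;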

    Read the article

  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including altq for traffic shaping) and ipfw's dummynet for bandwidth-limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as: "it works"; "it doesn't work"; "it might work but it's not stable and not recommended"; "it can work as long as you load the kernel modules in the right order"; "it used to work but with recent FreeBSD versions it doesn't"; "you can make it work provided you use a patch from pfsense". Then there's a mention that this patch might have been merged back into FreeBSD, but I can't find it. One certain thing is that pfsense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where can the patch be obtained if it's still necessary)? For reference, here's a sample of what I have for now and how I load things: /etc/rc.conf ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum" ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum" defaultrouter="80.224.45.1" gateway_enable="YES" firewall_enable="YES" firewall_script="/etc/ipfw.rules" pf_enable="YES" pf_rules="/etc/pf.conf" /etc/pf.conf WAN1="vtnet0" LAN1="vtnet1" set skip on lo0 set block-policy return scrub on $WAN1 all fragment reassemble scrub on $LAN1 all fragment reassemble altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default } queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%) queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%) nat on $WAN1 from $LAN1:network to any -> ($WAN1) block in all block out all antispoof quick for $WAN1 antispoof quick for $LAN1 pass in on $WAN1 inet proto icmp from any to $WAN1 keep state pass in on $WAN1 proto tcp from any to $WAN1 port www pass in on $WAN1 proto tcp from any to $WAN1 port ssh pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state pass out on $WAN1 keep state pass in on $LAN1 from $LAN1:network to any keep state /etc/ipfw.rules ipfw -q -f flush ipfw -q add 65534 allow all from any to any ipfw -q pipe 1 config bw 2048KBit/s ipfw -q pipe 2 config bw 2048KBit/s ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in

    Read the article

  • ServerAlias www.example.com is not recognized

    - by Tianzhou Chen
    Below is my config file: NameVirtualHost 12.34.56.78:80 <VirtualHost 12.34.56.78:80> ServerAdmin [email protected] ServerName domain1.com ServerAlias www.domain1.com DocumentRoot /srv/www/domain1.com/public_html1/ ErrorLog /srv/www/domain1.com/logs/error.log CustomLog /srv/www/domain1.com/logs/access.log combined </VirtualHost> <VirtualHost 12.34.56.78:80> ServerAdmin [email protected] ServerName domain2.com ServerAlias www.domain2.com DocumentRoot /srv/www/domain2.com/public_html1/ ErrorLog /srv/www/domain2.com/logs/error.log CustomLog /srv/www/domain2.com/logs/access.log combined </VirtualHost> The thing is, when I put www.domain1.com into the browser, apache2 doesn't retrieve the web page that resides in /srv/www/domain1.com/public_html1/; instead, it gets the page from the default document root defined in another file. However, if I put in www.domain2.com, everything works fine. I don't see any difference between the two VirtualHost config blocks, so I wonder what makes the difference. BTW, I haven't put any .htaccess file under their document roots. Thanks for your advice! Tianzhou

    Read the article

  • Access Database using DoCmd.OpenForm where clause - returning all values

    - by primus285
    DoCmd.OpenForm "Database Search", acFormDS, , srcLastName & " AND " & srcFirstName This is only a small sample of the where clause - there are many more terms. First, there is a set of If/Then type things up top that set the variables srcLastName and srcFirstName to some value. These are not the problem and work just fine. The trouble is getting them to return all values (for instance if you only want to search by one of them, or by neither (return the full database list)). Thus far I have settled for (in the If/Then section): srcLastName = "[Lastname] =" & Chr(34) & cboLastName & Chr(34) - to search for something, and srcLastName = "[Lastname] <>" & Chr(34) & "Nuthin" & Chr(34) - to return everything (not equal to an absurd and misspelled database term). The trouble is that data that is null is also not returned. If I have a null firstname, it will not show up in any search, period. Is there a term I can set [Lastname] and [Firstname] equal to that will return EVERYTHING (null, open, data, numbers, weird stuff and otherwise) in a search - an SQL form of "give me everything she's got, Scotty", if you will? The real issue here comes from the data entry - if I could just know that people would enter everything 100% of the time, this code would work. But forget to enter the person's age or whatever, and it won't return that entry. So far, the only other solution I have come up with is to put a counter in each If/Then statement. The count will go up by one for each thing that is being searched by. Then if the count is 1, I can search by something like just DoCmd.OpenForm "Database Search", acFormDS, , srcLastName or DoCmd.OpenForm "Database Search", acFormDS, , srcFirstName, then revert back to DoCmd.OpenForm "Database Search", acFormDS, , srcLastName & " AND " & srcFirstName when the count is 2 or more. The trouble here is that it only works for one term (unless I wanted to create a custom list of 2 combined, 3 combined, 4 combined, but that is not happening).
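    One common way to express "match this value, or match everything including Nulls when nothing was entered" is to push that test into the criteria themselves instead of comparing against a dummy string. A hedged Access-SQL sketch (the table, form and control names are invented):

        SELECT *
        FROM People
        WHERE ([Lastname]  = [Forms]![frmSearch]![cboLastName]  OR [Forms]![frmSearch]![cboLastName]  Is Null)
          AND ([Firstname] = [Forms]![frmSearch]![cboFirstName] OR [Forms]![frmSearch]![cboFirstName] Is Null);

    The same idea carries over to the VBA string: only append a "[Lastname] = ..." term when the control actually contains something, and leave the term out entirely otherwise, so unfilled fields never exclude Null rows.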

    Read the article

  • How to program a system for reading comprehension questions in English

    - by michael123
    Hi, I have to do some study on reading comprehension in English. My work is OK, but there is a part from the natural language processing (NLP) area that I have to use. I want some help with a QA system: how to answer reading comprehension questions automatically. I have a simple system for this that I got from this website http://www.cs.utah.edu/contest/2003/ - there is a simple system in Java, but it did not work for me. I tried to load the files from the Remedia dataset that contain the reading comprehension stories, but got no result. After running this system, I have to develop it with current techniques such as rule-based or pattern matching approaches, or combined with simple named entity recognition. How do I do that, and which of them is better to combine with the QA system? Thank you.

    Read the article

  • Combining Two Models in Rails for a Form

    - by matsko
    Hey guys. I'm very new to Rails and I've been building a CMS application backend. All is going well, but I would like to know if this is possible. Basically I have two models: @page { id, name, number } @extended_page { id, page_id, description, image } The idea is that there are a bunch of pages but NOT ALL pages have extended content. In the event that there is a page with extended content, I want to have a form that allows editing both of them. In the controller: @page = Page.find(params[:id]) @extended = Extended.find(:first, :conditions => ["page_id = ?", @page.id]) @combined = ... # merge the two somehow So in the view: <%- form_for @combined do |f| %> <%= f.label :name %> <%= f.text_field :name %> ... <%= f.label :description %> <%= f.text_field :description %> <%- end %> This way, in the controller there only has to be one model that will be updated (which will update both). Is this possible?

    Read the article

  • How to pass an xpath into an xquery function declaration

    - by topmulch
    Hi all, I use Apache Tomcat's Exist DB as an XML database and am trying to construct a sequence by passing the following xpath, defined in FLWOR's 'let' clause: $xpath := $root/second/third into a locally defined function declaration, like so: declare function local:someFunction($uuid as xs:string?, $xpath as xs:anyAtomicType?) { let $varOne := $xpath/fourth[@uuid = $uuid]/fifthRight let $varTwo := $xpath/fourth[@uuid = $uuid]/fifthLeft let $combined := ($varOne,$varTwo) return $combined }; Of course, when entering this in the exist xquery sandbox, I get Type: xs:anyAtomicType is not defined. What should I use in place of it, or should I do this a different way? Thanks in advance for any suggestions.

    Read the article

  • How can I combine result and subquery for IN comparison (mysql)

    - by user325804
    For a school project I need to create the following situation within one MySQL query. The situation is such that a child's tags and a parent's tags need to be combined into one and compared to a site's tags, depending on a few extra simple equality conditions. For this to happen I only see the option that the result of a subquery is combined with a subquery within that query, like this: SELECT tag.*, (SELECT group_concat(t1.id, ',', (SELECT group_concat(tag.id) FROM adcampaign INNER JOIN adcampaign_tag ON adcampaign.id = adcampaign_tag.adcampaign_id INNER JOIN tag ON adcampaign_tag.tag_id = tag.id WHERE adcampaign.id = 1)) FROM ad, ad_tag, tag AS t1 WHERE ad.id = ad_tag.ad_id AND ad_tag.tag_id = t1.id AND ad.adcampaign_id = 1 AND ad.agecategory_id = 1 AND ad.adsize_id = 1 AND ad.adtype_id = 1) as tags FROM tag WHERE tag.id IN tags But the IN comparison only returns the first result, because now the tags aren't a list but a concatenated string. Has anyone got any suggestions on this? I really need a way to combine it into one list.
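    Since GROUP_CONCAT produces one comma-separated string rather than a value list, IN can only compare against the string as a whole. A sketch of one workaround (simplified to the campaign tags only, using the table names from the question) is FIND_IN_SET:

        SELECT t.*
        FROM tag AS t
        JOIN (
            SELECT GROUP_CONCAT(act.tag_id) AS tag_list   -- e.g. '3,7,12'
            FROM adcampaign_tag AS act
            WHERE act.adcampaign_id = 1
        ) AS c
          ON FIND_IN_SET(t.id, c.tag_list) > 0;           -- true when t.id appears in the list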

    Read the article

  • www disappears when using name-based virtual hosts

    - by Tianzhou Chen
    Below is my config file: NameVirtualHost 12.34.56.78:80 <VirtualHost 12.34.56.78:80> ServerAdmin [email protected] ServerName domain1.com ServerAlias www.domain1.com DocumentRoot /srv/www/domain1.com/public_html1/ ErrorLog /srv/www/domain1.com/logs/error.log CustomLog /srv/www/domain1.com/logs/access.log combined </VirtualHost> <VirtualHost 12.34.56.78:80> ServerAdmin [email protected] ServerName domain2.com ServerAlias www.domain2.com DocumentRoot /srv/www/domain2.com/public_html1/ ErrorLog /srv/www/domain2.com/logs/error.log CustomLog /srv/www/domain2.com/logs/access.log combined </VirtualHost> The thing is, when I put www.domain1.com into the browser, it automatically changes to domain1.com. However, if I put in www.domain2.com, the address remains the same. I don't know why this happens. BTW, I haven't put a .htaccess file under their document roots. Thanks for your advice! Tianzhou

    Read the article

  • How do I get my custom requiredif attribute to prevent other attributes from firing

    - by user1757804
    I'm working on an MVC application. I've decorated a property with EqualTo, found here: http://dataannotationsextensions.org/EqualTo/Create as well as a custom RequiredIf attribute, as suggested here: http://blogs.msdn.com/b/simonince/archive/2011/02/04/conditional-validation-in-asp-net-mvc-3.aspx My issue is that even when the field is supposed to be required and isn't filled in, the EqualTo logic still fires. So I get error messages saying the field is required but also that the field doesn't match. If I replace the RequiredIf with a regular Required, only the Required message shows. What I'm trying to figure out is how the EqualTo logic is prevented when combined with the Required attribute but not prevented when combined with my custom RequiredIf. Any suggestions would be most appreciated; I've been racking my brain most of the day trying to figure out the MVC internals around Required.

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better): [Person] ID: int, primary key Name: nvarchar(xx) [Group] ID: int, primary key Name: nvarchar(xx) [Role] ID: int, primary key Name: nvarchar(xx) [PersonGroupRole] Person_ID: int, PRIMARY COMPOSITE OR NOT? Group_ID: int, PRIMARY COMPOSITE OR NOT? Role_ID: int, PRIMARY COMPOSITE OR NOT? Should any of the 3 IDs in the relation PersonGroupRole be marked as the primary key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? I can join anyway as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any reasons regarding language/SQL that might make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
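    For reference, a sketch of the composite-key option in T-SQL: the three foreign keys together form the primary key, so the same person cannot hold the same role in the same group twice and no surrogate ID column is needed.

        CREATE TABLE PersonGroupRole
        (
            Person_ID int NOT NULL REFERENCES Person(ID),
            Group_ID  int NOT NULL REFERENCES [Group](ID),
            Role_ID   int NOT NULL REFERENCES Role(ID),
            CONSTRAINT PK_PersonGroupRole
                PRIMARY KEY (Person_ID, Group_ID, Role_ID)  -- composite key doubles as the uniqueness rule
        );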

    Read the article

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is related to the performance of a clustered index on a reference table that has many rapid inserts and deletes. Table 1 "Collection": collection_pk int (among other fields). Table 2 "Item": item_pk int (among other fields). Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key). Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items from those collections very frequently, affecting the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
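    For what it's worth, rows are not wholesale re-sorted on every insert: a new row goes to its place in the clustered b-tree, at worst causing a page split. If measurement still shows the clustered key is the bottleneck, one option sometimes weighed is keeping the composite primary key but declaring it nonclustered (a sketch below, using the table names from the question and assuming the key is being created fresh); the trade-off is that the table then becomes a heap, so test before committing to it.

        ALTER TABLE Collection_Items
            ADD CONSTRAINT PK_Collection_Items
            PRIMARY KEY NONCLUSTERED (collection_pk, item_pk);

        -- supporting index for lookups from the item side
        CREATE NONCLUSTERED INDEX IX_Collection_Items_Item
            ON Collection_Items (item_pk, collection_pk);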

    Read the article

  • Combining XSLT transforms

    - by Flynn1179
    Is there a way to combine two XSLT documents into a single XSLT document that does the same as transforming with the original two in sequence? i.e. combining XSLTA and XSLTB into XSLTC such that XSLTB( XSLTA( xml )) == XSLTC( xml )? There are three reasons I'd like to be able to do this: Simplifies development; some operations need sequential transforms, and although I can generate a combined one by hand, it's a lot more difficult to maintain than two much simpler, separate transforms. Speed; one transform is in most cases hopefully faster than two. I'm currently working on a program that literally just transforms a data file in XML into an XHTML page capable of editing it using one XSLT, and a second XSLT that transforms the XHTML page back into the data file when it's saved. One test I hope to be able to do is to combine the two and easily confirm that the 'combined' XSLT leaves the data unchanged.

    Read the article

  • (iphone) maintaining CGContextRef or CGLayerRef is a bad idea?

    - by Eugene
    Hi, I need to work with many images, and I can't hold them all as UIImages in memory because they are too big. I also need to change the colors of images and merge them on the fly. Creating a UIImage from the underlying NSData, changing its color, and combining images when you can't keep many of them in memory is fairly slow (as far as I can tell). I thought maybe I could store the underlying CGLayerRef (for the images that will be combined) and CGContextRef (for the resulting combined image). I am new to the drawing world, and not sure whether a CGLayerRef or CGContextRef is smaller in memory than a UIImage. I recently heard that a w*h image takes up w*h*4 bytes in memory. Does a CGLayerRef or CGContextRef also take up that much memory? Thank you

    Read the article

  • ServerAlias not working

    - by Janis Peisenieks
    I have a VPS that I have configured to host multiple websites with name-based hosting. It is all good while only using example.com and www.example.com. It also works with example.net, but when I try www.example.net, it reverts to my default site configuration, which just shows my default (empty) index.html page. Here's the default file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Here's the configuration for the example.com site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/www/example.com/public_html/ ErrorLog /srv/www/example.com/logs/error.log CustomLog /srv/www/example.com/logs/access.log combined <Directory /srv/www/example.com/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here is the config for the example.net site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.net ServerAlias www.example.net DocumentRoot /srv/www/example.net/public_html/ ErrorLog /srv/www/example.net/logs/error.log CustomLog /srv/www/example.net/logs/access.log combined <Directory /srv/www/example.net/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> Where could the problem be? I believe that something is going wrong with the ServerAlias property. Could it be because of the way the sites are built? Because example.com is a Joomla site, and example.net is a Zend Framework site. Just in case, I'll also insert the .htaccess file for example.net, since example.com has it disabled: SetEnv APPLICATION_ENV development RewriteRule ^(browse|config).* - [L] ErrorDocument 500 /error-docs/500.shtml SetEnv CACHE_OFFSET 2678400 <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$"> Header set Expires "Fri, 25 Sep 2037 19:30:32 GMT" Header unset ETag FileETag None </FilesMatch> RewriteEngine On RewriteRule ^(adm|statistics) - [L] RewriteRule ^/public/(.*)$ http://example.net/$1 [R] RewriteRule ^(.*)$ public/$1 [L] Any help would be greatly appreciated! Edit: So that my question is absolutely clear: the problem is that one site works both with and without the www prefix, and the second one does not. I would like to know how to enable the second site to work with the www prefix as well.

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar. You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql More information about the seminar: Where: Radisson Edwardian Heathrow Hotel, London. When: Thursday 17th June 2010. This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: · Debunk many of the ingrained misconceptions around SQL Server's behaviour · Show you disaster recovery techniques critical to preserving your company's life-blood - the data · Explain how a common application design pattern can wreak havoc in the database · Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please note: the agenda may be subject to change. Session abstracts: KEYNOTE: Bridging the Gap Between Development and Production. Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong. SESSION ONE: SQL Server Mythbusters. It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest. Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike.
    In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward. Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION FOUR: Essential Database Maintenance. In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well. Speaker biographies: Paul S. Randal and Kimberly L. Tripp. Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >