Search Results

Search found 6638 results on 266 pages for 'boost range'.

Page 74/266 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • "end()" iterator for back inserters?

    - by Thanatos
    For iterators such as those returned from std::back_inserter(), is there something that can be used as an "end" iterator? This seems a little nonsensical at first, but I have an API which is: template<typename InputIterator, typename OutputIterator> void foo( InputIterator input_begin, InputIterator input_end, OutputIterator output_begin, OutputIterator output_end ); foo performs some operation on the input sequence, generating an output sequence. (Whose length is known to foo but may or may not be equal to the input sequence's length.) Taking the output_end parameter is the odd part: std::copy doesn't do this, for example, and assumes you're not going to pass it garbage. foo does it to provide range checking: if you pass a range too small, it throws an exception, in the name of defensive programming. (Instead of potentially overwriting random bits in memory.) Now, say I want to pass foo a back inserter, specifically one from a std::vector which has no limit outside of memory constraints. I still need an "end" iterator - in this case, something that will never compare equal. (Or, if I had a std::vector but with a restriction on length, perhaps it might sometimes compare equal?) How do I go about doing this? I do have the ability to change foo's API - is it better to not check the range, and instead provide an alternate means to get the required output range? (Which would be needed anyway for raw arrays, but not required for back inserters into a vector.) This would seem less robust, but I'm struggling to make the "robust" (above) work.
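
    One way to sidestep the "end iterator" question is to drop output_end from the API and push the range check into the output iterator itself. Below is a minimal sketch of such a bounded back-inserter (the class and helper names are hypothetical, assuming a C++03-style output iterator; neither the standard library nor Boost ships this exact class). foo would then take only output_begin, and a plain std::back_inserter could be passed when no limit is wanted.

    #include <cstddef>
    #include <iterator>
    #include <stdexcept>

    // Like std::back_insert_iterator, but throws once more than max_size
    // elements have been written through it.
    template <typename Container>
    class bounded_back_insert_iterator {
    public:
        typedef std::output_iterator_tag iterator_category;
        typedef void value_type;
        typedef void difference_type;
        typedef void pointer;
        typedef void reference;

        bounded_back_insert_iterator(Container& c, std::size_t max_size)
            : container_(&c), remaining_(max_size) {}

        bounded_back_insert_iterator& operator=(const typename Container::value_type& value) {
            if (remaining_ == 0)
                throw std::length_error("output range exhausted");   // instead of scribbling on memory
            container_->push_back(value);
            --remaining_;
            return *this;
        }

        bounded_back_insert_iterator& operator*()     { return *this; }
        bounded_back_insert_iterator& operator++()    { return *this; }
        bounded_back_insert_iterator  operator++(int) { return *this; }

    private:
        Container*  container_;
        std::size_t remaining_;
    };

    // Convenience maker, analogous to std::back_inserter.
    template <typename Container>
    bounded_back_insert_iterator<Container> bounded_back_inserter(Container& c, std::size_t max_size) {
        return bounded_back_insert_iterator<Container>(c, max_size);
    }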

    Read the article

  • Python re module becomes 20 times slower when called on greater than 101 different regex

    - by Wiil
    My problem is about parsing log files and removing variable parts on each line to be able to group them. For instance: s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s) s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s) I have about 120+ matching rules like those above. I have found no performance issues while successively applying 100 different regexes. But a huge slowdown comes when applying 101 regexes. The exact same behavior happens when replacing my rule set with for a in range(100): s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s) It gets 20 times slower with range(101) instead. # range(100) % ./dashlog.py file.bz2 == Took 2.1 seconds. == # range(101) % ./dashlog.py file.bz2 == Took 47.6 seconds. == Why is this happening? And is there any known workaround? (Happens on Python 2.6.6/2.7.2 on Linux/Windows.)

    Read the article

  • How can I override list methods to do vector addition and subtraction in python?

    - by Bobble
    I originally implemented this as a wrapper class around a list, but I was annoyed by the number of operator() methods I needed to provide, so I had a go at simply subclassing list. This is my test code: class CleverList(list): def __add__(self, other): copy = self[:] for i in range(len(self)): copy[i] += other[i] return copy def __sub__(self, other): copy = self[:] for i in range(len(self)): copy[i] -= other[i] return copy def __iadd__(self, other): for i in range(len(self)): self[i] += other[i] return self def __isub__(self, other): for i in range(len(self)): self[i] -= other[i] return self a = CleverList([0, 1]) b = CleverList([3, 4]) print('CleverList does vector arith: a, b, a+b, a-b = ', a, b, a+b, a-b) c = a[:] print('clone test: e = a[:]: a, e = ', a, c) c += a print('OOPS: augmented addition: c += a: a, c = ', a, c) c -= b print('OOPS: augmented subtraction: c -= b: b, c, a = ', b, c, a) Normal addition and subtraction work in the expected manner, but there are problems with the augmented addition and subtraction. Here is the output: >>> CleverList does vector arith: a, b, a+b, a-b = [0, 1] [3, 4] [3, 5] [-3, -3] clone test: e = a[:]: a, e = [0, 1] [0, 1] OOPS: augmented addition: c += a: a, c = [0, 1] [0, 1, 0, 1] Traceback (most recent call last): File "/home/bob/Documents/Python/listTest.py", line 35, in <module> c -= b TypeError: unsupported operand type(s) for -=: 'list' and 'CleverList' >>> Is there a neat and simple way to get augmented operators working in this example?

    Read the article

  • C++ Loop - Need variable to accumulate sum

    - by user1780064
    I'm writing a program to ask the user to enter a value between 5 and 21 (inclusive). If the number entered is not in this range, it prints, "Please try again". If the number is within the range, I need to take that number, and print the sum of all the numbers from 1 to the value entered. So if the user entered "7", the sum would be "28". I successfully wrote the first loop, in the case of the number not being within the range, but cannot figure out how to run the second loop- whether to use a while, do-while, or for loop. Please advise. #include <iostream> int main () { int uservalue; int count; int sum; //Prompt user for input do { cout << "Enter a value from 5 to 21: "; cin >> uservalue; if (uservalue < 5 || uservalue > 21) cout << "Value out of range. Try again..." << endl; } while (uservalue < 5 || uservalue > 21); cout << endl; //Loop to accumulate sum for (count = 1, count < uservalue, count++;) { sum = uservalue + count; if (uservalue <= 5 || uservalue <= 21) cout << the sum is " << sum << endl; } return 0; }
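
    For what it's worth, here is a minimal sketch of how the accumulation loop might be written once the validated value is available (this is just one possible correction, not the original poster's code):

    #include <iostream>
    using namespace std;

    int main() {
        int uservalue = 0;

        // Prompt until the value is within [5, 21]
        do {
            cout << "Enter a value from 5 to 21: ";
            cin >> uservalue;
            if (uservalue < 5 || uservalue > 21)
                cout << "Value out of range. Try again..." << endl;
        } while (uservalue < 5 || uservalue > 21);
        cout << endl;

        // Accumulate 1 + 2 + ... + uservalue
        int sum = 0;
        for (int count = 1; count <= uservalue; ++count)
            sum += count;

        cout << "The sum is " << sum << endl;   // e.g. 7 gives 28
        return 0;
    }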

    Read the article

  • How to get last Friday of month(s) using .NET

    - by Newbie
    I have a function that returns only the Fridays from a range of dates public static List<DateTime> GetDates(DateTime startDate, int weeks) { int days = weeks * 7; //Get the whole date range List<DateTime> dtFulldateRange = Enumerable.Range(-days, days).Select(i => startDate.AddDays(i)).ToList(); //Get only the fridays from the date range List<DateTime> dtOnlyFridays = (from dtFridays in dtFulldateRange where dtFridays.DayOfWeek == DayOfWeek.Friday select dtFridays).ToList(); return dtOnlyFridays; } Purpose of the function: "List of dates from the Week number specified till the StartDate i.e. If startdate is 23rd April, 2010 and the week number is 1, then the program should return the dates from 16th April, 2010 till the start date". I am calling the function as: DateTime StartDate1 = DateTime.ParseExact("20100430", "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture); List<DateTime> dtList = Utility.GetDates(StartDate1, 4).ToList(); Now the requirement has changed a bit. I need to find out only the last Fridays of every month. The input to the function will remain the same.

    Read the article

  • Copy Word format into Outlook message

    - by Jaster
    Hi, I have an Outlook automation task. I would like to use a Word document as a template for the message content. Let's say I have some formatted text containing tables, colors, sizes, etc. Now I'd like to copy/paste this content into an Outlook message object. I'm used to the interop stuff, I just have no idea how to copy/paste this correctly. Here is some sample code (no cleanup): String path = @"file.docx"; String savePath = @"file.msg"; Word.Application wordApp = new Word.Application(); Word.Document currentDoc = wordApp.Documents.Open(path); Word.Range range = currentDoc.Range(0, currentDoc.Characters.Count); String wordText = range.Text; Outlook.Application oApp = new Outlook.Application(); Outlook.NameSpace ns = oApp.GetNamespace("MAPI"); ns.Logon("MailBox"); Outlook._MailItem oMsg = oApp.CreateItem(Outlook.OlItemType.olMailItem); oMsg.To = "[email protected]"; oMsg.Body = wordText; oMsg.SaveAs(savePath); Using Outlook/Word 2007, however the Word files may still be in 2000/2003 format (.doc). Visual Studio 2010 with .NET 4.0 (should be obvious from the sample code). Any suggestions?

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is related to storage management, due to the high volumes of unstoppably growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability could be a nightmare. One standard option that you have with Oracle WebCenter Content is to store data in the database. And the Oracle Database allows you to leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content Database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It’s up to you how you will manage that; you just need a good plan. During your “storage brainstorm plan”, keep in mind what you need: do you need to store petabytes of documents? Do you need everything on-line? Is there a way to logically separate the “good content” from the “legacy content”? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that this document could receive a lot of revisions, and maybe you should consider the revision creation date. Your plan can also have complex rules, like per Document Type or per custom metadata like department, or a hybrid per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, different disk types (such as ssds, sas, sata, tape,…), separated according to your business requirements, separating the “hot” content from the legacy and easily matching your compliance requirements. If you plan to partition by revision, the simple way is to consider the dId, which is the sequential unique id for every content item created using WebCenter Content, or dLastModified, which is the date field of the FileStorage table that contains the date of inclusion of the content into the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using “Partition by Range” on the dLastModified column (you can use the dId or a join with other tables for other metadata such as dDocType, Security, etc…). The test scenario below covers: pre-existing data on the JDBC Storage to be migrated to the new partitioned JDBC Storage; partition by date; automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+); deduplication and compression for legacy data; Oracle WebCenter Content 11g PS5 (could present some customizations that do not affect the test scenario). For the test case you need some data stored using JDBC Storage to be the “legacy” data. If you have not done this before, just create a Storage rule pointed to the JDBC Storage: enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a dba user. We will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test and you should do a proper backup of your environment. When you use the schema owner, you need some privileges; using the dba user, grant the privileges needed: REM Grant privileges required for online redefinition.
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will be partitioned automatically using 3 tablespaces in a round robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Tablespaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before starting, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check whether it is possible to execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If there are no error messages, you are good to go. Create a Partitioned Interim FileStorage table.
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there are no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that it will show 2 lines related to the constraints; this is expected.
Synchronize the interim table FileStorage_PART: BEGIN DBMS_REDEFINITION.SYNC_INTERIM_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; Gather statistics on the new table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE); Complete the redefinition: BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new partitions created automatically, without generating an error if you “forgot” to create all the future ranges. You will see something like: You can now drop the FileStorage_PART table: DROP TABLE FileStorage_PART PURGE; To check that the FileStorage table is valid and is partitioned, use the command: SELECT num_rows,partitioned FROM user_tables WHERE table_name = 'FILESTORAGE'; You can list the contents of the FileStorage table in a specific partition, for example: SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY) Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user): SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE'; SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%'; After the redefinition process completes, you have a new FileStorage table storing all content that has the Storage rule pointed to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and see if the file has been created and you can open it; this means that everything is working. The redefinition process can be repeated many times; this allows you to test which layout is better, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read only. There are no issues with viewing; WebCenter Content does not alter the revisions. When you try to delete content that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is creating a custom component that checks the partitions and, if you have a document in a “Read Only” repository, executes the deletion process for the metadata and marks the document to be deleted in the next db maintenance, like a new redefinition. Take a tablespace off-line for archiving purposes or any other reason.
When you try to open a document that is included in this tablespace, you will receive an error that the content could not be retrieved, but the other online tablespaces are not affected. Same behavior when deleting documents. Again, a custom component is the solution. If you have a document “out of range”, the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to an empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the records from the FileStorage table and you will not be able to open the stored content. You always need to keep in mind the indexes and constraints. A redefinition separating the original content (vault) from the renditions and separating by date at the same time. This could be an option for DAM environments that want to have a special place for the renditions and put the original files in storage with less performance. The process will be the same; you just need to change the script of the interim table to use composite partitioning. It will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) ) ENABLE ROW MOVEMENT;
The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. Also, we can include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • Use of for_each on map elements

    - by Antonio
    I have a map where I'd like to call a member function on every mapped data object. I already know how to do this on a sequence, but is it possible to do it on an associative container? The closest answer I could find was this: Boost.Bind to access std::map elements in std::for_each. But I cannot use Boost in my project, so is there an STL alternative to boost::bind that I'm missing? If it is not possible, I thought of creating a temporary sequence of pointers to the data objects and then calling for_each on it, something like this: class MyClass { public: void Method() const; }; std::map<int, MyClass> Map; //... std::vector<MyClass*> Vector; std::transform(Map.begin(), Map.end(), std::back_inserter(Vector), std::mem_fun_ref(&std::map<int, MyClass>::value_type::second)); std::for_each(Vector.begin(), Vector.end(), std::mem_fun(&MyClass::Method)); It looks too obfuscated and I don't really like it. Any suggestions?
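
    A minimal Boost-free sketch (assuming C++03; the C++11 lambda in the comment does the same thing) that applies the member function directly to the map's value_type, with no temporary vector:

    #include <algorithm>
    #include <map>

    class MyClass {
    public:
        void Method() const {}
    };

    // Adapts the map's value_type (a pair) to the member call we want.
    struct CallMethod {
        void operator()(const std::pair<const int, MyClass>& entry) const {
            entry.second.Method();
        }
    };

    int main() {
        std::map<int, MyClass> Map;
        std::for_each(Map.begin(), Map.end(), CallMethod());
        // C++11 equivalent without the helper struct:
        // std::for_each(Map.begin(), Map.end(),
        //               [](const std::pair<const int, MyClass>& e) { e.second.Method(); });
        return 0;
    }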

    Read the article

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    Double has a greater range than a 64-bit integer, but its precision is less due to its representation (since double is 64-bit as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part. #include <boost/cstdint.hpp> #include <limits> template<typename T, typename TFloat> void maxint_to_double() { T i = std::numeric_limits<T>::max(); TFloat d = i; std::cout << std::fixed << i << std::endl << d << std::endl; } int main() { maxint_to_double<int, double>(); maxint_to_double<boost::intmax_t, double>(); maxint_to_double<int, float>(); return 0; } This prints: 2147483647 2147483647.000000 9223372036854775807 9223372036854775800.000000 2147483647 2147483648.000000 Note how max int can fit into a double without loss of precision and boost::intmax_t (64-bit in this case) cannot. float can't even hold an int. Now, the question: is there a way in C++ to check if the entire range of a given integer type can fit into a floating point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and would not involve enumerating the constants the compiler should know or can compute.
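
    One compile-time check along those lines compares the floating type's mantissa width with the integer's value bits, both reported by std::numeric_limits. A sketch (assuming C++11 static_assert; BOOST_STATIC_ASSERT would serve the same purpose in C++03):

    #include <limits>

    // Every value of integer type T is exactly representable in TFloat when
    // TFloat's mantissa has at least as many bits as T has value bits
    // (assuming a radix-2 floating point type).
    template <typename T, typename TFloat>
    struct fits_exactly {
        static const bool value =
            std::numeric_limits<TFloat>::radix == 2 &&
            std::numeric_limits<TFloat>::digits >= std::numeric_limits<T>::digits;
    };

    int main() {
        static_assert(fits_exactly<int, double>::value, "int fits in double");
        static_assert(!fits_exactly<long long, double>::value, "64-bit int does not fit in double");
        static_assert(!fits_exactly<int, float>::value, "int does not fit in float");
        return 0;
    }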

    Read the article

  • What would be the safest way to store objects of classes derived from a common interface in a common container?

    - by Svenstaro
    I'd like to manage a bunch of objects of classes derived from a shared interface class in a common container. To illustrate the problem, let's say I'm building a game which will contain different actors. Let's call the interface IActor and derive Enemy and Civilian from it. Now, the idea is to have my game main loop be able to do this: // somewhere during init std::vector<IActor> ActorList; Enemy EvilGuy; Civilian CoolGuy; ActorList.push_back(EvilGuy); ActorList.push_back(CoolGuy); and // main loop while(!done) { BOOST_FOREACH(IActor CurrentActor, ActorList) { CurrentActor.Update(); CurrentActor.Draw(); } } ... or something along those lines. This example obviously won't work but that is pretty much the reason I'm asking here. I'd like to know: What would be the best, safest, highest-level way to manage those objects in a common heterogeneous container? I know about a variety of approaches (Boost::Any, void*, handler class with boost::shared_ptr, Boost.Pointer Container, dynamic_cast) but I can't decide which would be the way to go here. Also I'd like to emphasize that I want to stay away as far as possible from manual memory management or nested pointers. Help much appreciated :).
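
    A minimal sketch of the most common approach (shown here with C++11 std::unique_ptr; boost::shared_ptr or Boost.Pointer Container play the same role pre-C++11): store smart pointers to the interface, so the container owns the actors and virtual dispatch still works, with no manual memory management.

    #include <memory>
    #include <vector>

    class IActor {
    public:
        virtual ~IActor() {}
        virtual void Update() = 0;
        virtual void Draw() = 0;
    };

    class Enemy : public IActor {
    public:
        void Update() override {}
        void Draw() override {}
    };

    class Civilian : public IActor {
    public:
        void Update() override {}
        void Draw() override {}
    };

    int main() {
        // Owning, heterogeneous container of polymorphic actors.
        std::vector<std::unique_ptr<IActor>> ActorList;
        ActorList.push_back(std::unique_ptr<IActor>(new Enemy));
        ActorList.push_back(std::unique_ptr<IActor>(new Civilian));

        for (auto& actor : ActorList) {   // main loop body
            actor->Update();
            actor->Draw();
        }
        return 0;
    }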

    Read the article

  • What is the rationale for not allowing overloading of C++ conversion operators with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the Source class. The same applies to the assignment operator; it must be defined on the Target class. When the Source and Target classes of the needed conversion are independent of each other, neither can the Source define a conversion operator, nor can the Target define a constructor from a Source. Usually we get around it by defining a specific function such as Target ConvertToTarget(Source& v); If C++0x allowed overloading the conversion operator with non-member functions, we could for example define the conversion implicitly or explicitly between unrelated types. template < typename To, typename From > operator To(const From& val); For example we could specialize the conversion from chrono::time_point to posix_time::ptime as follows template < class Clock, class Duration > operator boost::posix_time::ptime( const boost::chrono::time_point< Clock, Duration >& from) { using namespace boost; typedef chrono::time_point< Clock, Duration > time_point_t; typedef chrono::nanoseconds duration_t; typedef duration_t::rep rep_t; rep_t d = chrono::duration_cast< duration_t >( from.time_since_epoch()).count(); rep_t sec = d/1000000000; rep_t nsec = d%1000000000; return posix_time::from_time_t(0)+ posix_time::seconds(static_cast< long >(sec))+ posix_time::nanoseconds(nsec); } And use the conversion as any other conversion. So the question is: What is the rationale for not allowing overloading of C++ conversion operators with non-member functions?
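
    A hedged sketch of the usual workaround while the language rule stands: a free function template that plays the role of a non-member conversion and can be specialized (or overloaded) per type pair. The names here (convert_to, Source, Target) are illustrative only.

    #include <iostream>

    // Primary template: declared once, defined per type pair below.
    template <typename To, typename From>
    To convert_to(const From& from);

    struct Source { int value; };
    struct Target { int value; };

    // A "non-member conversion" between two otherwise unrelated types.
    template <>
    Target convert_to<Target, Source>(const Source& s) {
        Target t;
        t.value = s.value;
        return t;
    }

    int main() {
        Source s = { 42 };
        Target t = convert_to<Target>(s);   // reads almost like a cast
        std::cout << t.value << std::endl;  // prints 42
        return 0;
    }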

    Read the article

  • Lucene setboost doesn't work

    - by Keven
    Hi all, our team just upgraded Lucene from 2.3 to 3.0 and we are confused about the setBoost and getBoost of Document. What we want is just to set a boost for each document when adding them to the index; then, when searching, the documents in the response should be ordered differently according to the boost I set. But it seems the order is not changed at all, and even the boost of each document in the search response is still 1.0. Could someone give me a hint? Following is our code: String[] a = new String[] { "schindler", "spielberg", "shawshank", "solace", "sorcerer", "stone", "soap", "salesman", "save" }; List<String> strings = Arrays.asList(a); AutoCompleteIndex index = new Index(); IndexWriter writer = new IndexWriter(index.getDirectory(), AnalyzerFactory.createAnalyzer("en_US"), true, MaxFieldLength.LIMITED); float i = 1f; for (String string : strings) { Document doc = new Document(); Field f = new Field(AutoCompleteIndexFactory.QUERYTEXTFIELD, string, Field.Store.YES, Field.Index.NOT_ANALYZED); doc.setBoost(i); doc.add(f); writer.addDocument(doc); i += 2f; } writer.close(); IndexReader reader2 = IndexReader.open(index.getDirectory()); for (int j = 0; j < reader2.maxDoc(); j++) { if (reader2.isDeleted(j)) { continue; } Document doc = reader2.document(j); Field f = doc.getField(AutoCompleteIndexFactory.QUERYTEXTFIELD); System.out.println(f.stringValue() + ":" + f.getBoost() + ", docBoost:" + doc.getBoost()); doc.setBoost(j); }

    Read the article

  • noncopyable static const member class in template class

    - by Dukales
    I have a non-copyable (inherited from boost::noncopyable) class that I use as a custom namespace. Also, I have another class that uses the previous one, as shown here: #include <boost/utility.hpp> #include <cmath> template< typename F > struct custom_namespace : boost::noncopyable { F sqrt_of_half(F const & x) const { using std::sqrt; return sqrt(x / F(2.0L)); } // ... maybe others are not so dummy const/constexpr methods }; template< typename F > class custom_namespace_user { static ::custom_namespace< F > const custom_namespace_; public : F poisson() const { return custom_namespace_.sqrt_of_half(M_PI); } static F square_diagonal(F const & a) { return a * custom_namespace_.sqrt_of_half(1.0L); } }; template< typename F > ::custom_namespace< F > const custom_namespace_user< F >::custom_namespace_(); This code leads to the following error (even without instantiation): error: no 'const custom_namespace custom_namespace_user::custom_namespace_()' member function declared in class 'custom_namespace_user' The following way is not legitimate either: template< typename F > ::custom_namespace< F > const custom_namespace_user< F >::custom_namespace_ = ::custom_namespace< F >(); What should I do to declare these two classes (the first as a noncopyable static const member of the second)? Is this feasible?
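
    For reference, a sketch of how the out-of-class definition is usually written. The trailing "()" in the first attempt turns the definition into a member function declaration, which is exactly what the error message complains about, and the copy-initialization in the second attempt needs a copy, which boost::noncopyable forbids. One way out (a sketch, not the only possible fix) is to drop the "()" and give the class a user-provided default constructor, which a const object of class type needs:

    #include <boost/utility.hpp>
    #include <cmath>

    template <typename F>
    struct custom_namespace : boost::noncopyable {
        custom_namespace() {}   // user-provided default ctor, so a const object can be default-constructed
        F sqrt_of_half(F const& x) const {
            using std::sqrt;
            return sqrt(x / F(2.0L));
        }
    };

    template <typename F>
    class custom_namespace_user {
        static ::custom_namespace<F> const custom_namespace_;
    public:
        F poisson() const { return custom_namespace_.sqrt_of_half(F(3.141592653589793L)); }
    };

    // Definition of the static data member: note there is no "()" after the name.
    template <typename F>
    ::custom_namespace<F> const custom_namespace_user<F>::custom_namespace_;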

    Read the article

  • What is the difference between Inversion of Control and Dependency injection in C++?

    - by rlbond
    I've been reading recently about DI and IoC in C++. I am a little confused (even after reading related questions here on SO) and was hoping for some clarification. It seems to me that being familiar with the STL and Boost leads to use of dependency injection quite a bit. For example, let's say I made a function that found the mean of a range of numbers: template <typename Iter> double mean(Iter first, Iter last) { double sum = 0; size_t number = 0; while (first != last) { sum += *(first++); ++number; } return sum/number; }; Is this dependency injection? Inversion of control? Neither? Let's look at another example. We have a class: class Dice { public: typedef boost::mt19937 Engine; Dice(int num_dice, Engine& rng) : n_(num_dice), eng_(rng) {} int roll() { int sum = 0; for (int i = 0; i < n_; ++i) sum += boost::uniform_int<>(1,6)(eng_); return sum; } private: Engine& eng_; int n_; }; This seems like dependency injection. But is it inversion of control? Also, if I'm missing something, can someone help me out?

    Read the article

  • Why do you need "extern C" for C++ callbacks to C functions?

    - by Artyom
    Hello, I find such examples in Boost code. namespace boost { namespace { extern "C" void *thread_proxy(void *f) { .... } } // anonymous void thread::thread_start(...) { ... pthread_create(something,0,&thread_proxy,something_else); ... } } // boost Why do you actually need this extern "C"? It is clear that the thread_proxy function is private and internal, and I do not expect that it would be mangled as "thread_proxy" because I actually do not need it mangled at all. In fact, in all the code that I have written and that runs on many platforms, I never used extern "C" and this has worked as-is with normal functions. Why is extern "C" added? My problem is that extern "C" functions pollute the global namespace and they are not actually hidden as the author expects. This is not a duplicate! I'm not talking about mangling and external linkage. It is obvious in this code that external linkage is unwanted!

    Read the article

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the boost libraries I'm using. Here's what the .pro file looks like QT += core gui TARGET = MyTarget TEMPLATE = app CONFIG += no_keywords \ link_pkgconfig SOURCES += file1.cpp \ file2.cpp \ file3.cpp PKGCONFIG += my_package \ sqlite3 LIBS += -lsqlite3 \ -lboost_signals \ -lboost_date_time HEADERS += file1.h\ file2.h\ file3.h FORMS += mainwindow.ui RESOURCES += Resources/resources.qrc This produces the following command: g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore Note: mylib1 and mylib2 are statically compiled by another project, placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file: prefix=/usr/local exec_prefix=${prefix} libdir=${exec_prefix}/lib includedir=${prefix}/include Name: my_package Description: My component package Version: 0.1 URL: http://example.com Libs: -L${libdir} -lmylib1 -lmylib2 Cflags: -I${includedir}/my_package/ The linking stage fails spectacularly as mylib1 and mylib2 come up with a lot of undefined references to boost libraries that both the app and mylib1 and mylib2 are using. We have another build method using scons and it properly orders things for the linker. Its build command is below. g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore Note that the principal difference is the order of the boost libs. Scons puts them at the end just before QtGui and QtCore while qmake puts them first. The other differences in the compile commands are unimportant, as I have hand-modified the qmake-produced makefile and the simple reordering fixed the problem. So my question is, how do I enforce the right order in my .pro file despite what qmake thinks it should be?

    Read the article

  • Why do you need "extern C" for C++ callbacks to C functions?

    - by Artyom
    Hello, I find such examples in Boost code. namespace boost { namespace { extern "C" void *thread_proxy(void *f) { .... } } // anonymous void thread::thread_start(...) { ... pthread_create(something,0,&thread_proxy,something_else); ... } } // boost Why do you actually need this extern "C"? It is clear that the thread_proxy function is private and internal, and I do not expect that it would be mangled as "thread_proxy" because I actually do not need it mangled at all. In fact, in all the code that I have written and that runs on many platforms, I never used extern "C" and this has worked as-is with normal functions. Why is extern "C" added? My problem is that extern "C" functions pollute the global namespace and they are not actually hidden as the author expects. This is not a duplicate! I'm not talking about mangling and external linkage. It is obvious in this code that external linkage is unwanted! Answer: The calling conventions of C and C++ functions are not necessarily the same, so you need to create one with the C calling convention. See 7.5 (p4) of the C++ standard.

    Read the article

  • How to negate a predicate function using operator ! in C++?

    - by Chan
    Hi, I want to erase all the elements that do not satisfy a criterion. For example: delete all the characters in a string that are not digits. My solution using boost::is_digit worked well. struct my_is_digit { bool operator()( char c ) const { return c >= '0' && c <= '9'; } }; int main() { string s( "1a2b3c4d" ); s.erase( remove_if( s.begin(), s.end(), !boost::is_digit() ), s.end() ); s.erase( remove_if( s.begin(), s.end(), !my_is_digit() ), s.end() ); cout << s << endl; return 0; } Then I tried my own version, and the compiler complained :( error C2675: unary '!' : 'my_is_digit' does not define this operator or a conversion to a type acceptable to the predefined operator I could use the not1() adapter, however I still think the operator ! is more meaningful in my current context. How could I implement such an operator ! for my own predicate, like boost::is_digit has? Any idea? Thanks, Chan Nguyen
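
    A sketch of one way to get the ! syntax without Boost (the negator type here is hand-rolled and hypothetical; Boost's version is more general, returning a reusable negating wrapper):

    #include <algorithm>
    #include <iostream>
    #include <string>

    struct my_is_digit {
        bool operator()(char c) const { return c >= '0' && c <= '9'; }
    };

    // The functor returned by operator! below.
    struct my_is_not_digit {
        bool operator()(char c) const { return !my_is_digit()(c); }
    };

    // The "!" that the compiler said was missing: a free unary operator
    // taking the predicate and returning its negation.
    inline my_is_not_digit operator!(const my_is_digit&) { return my_is_not_digit(); }

    int main() {
        std::string s("1a2b3c4d");
        s.erase(std::remove_if(s.begin(), s.end(), !my_is_digit()), s.end());
        std::cout << s << std::endl;   // prints 1234
        return 0;
    }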

    Read the article

  • Including typedef of child in parent class

    - by Baz
    I have a class which looks something like this. I'd prefer to have the typedef of ParentMember in the Parent class and rename it Member. How might this be possible? The only way I can see is to have std::vector as a public member instead of using inheritance. typedef std::pair<std::string, boost::any> ParentMember; class Parent: public std::vector<ParentMember> { public: template <typename T> std::vector<T>& getMember(std::string& s) { MemberFinder finder(s); std::vector<ParentMember>::iterator member = std::find_if(begin(), end(), finder); boost::any& container = member->second; return boost::any_cast<std::vector<T>&>(container); } private: class Finder { ... }; };
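
    A sketch of the composition alternative mentioned in the question: keep the vector as a private member and expose the typedef as Parent::Member. Boost is assumed available, as in the original; the Finder helper is filled in here as a hypothetical MemberFinder, and error handling for a missing name is omitted.

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>
    #include <boost/any.hpp>

    class Parent {
    public:
        typedef std::pair<std::string, boost::any> Member;   // the typedef now lives inside Parent

        template <typename T>
        std::vector<T>& getMember(const std::string& s) {
            std::vector<Member>::iterator member =
                std::find_if(members_.begin(), members_.end(), MemberFinder(s));
            return boost::any_cast<std::vector<T>&>(member->second);
        }

    private:
        // Predicate matching a member by name.
        struct MemberFinder {
            explicit MemberFinder(const std::string& s) : name(s) {}
            bool operator()(const Member& m) const { return m.first == name; }
            std::string name;
        };

        std::vector<Member> members_;   // composition instead of inheriting from std::vector
    };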

    Read the article

  • Wireless VGA for a projector

    - by Andrew
    I am in need of a wireless VGA suitable for around 30m range and available in Australia. Something just like IOgears GUWAVKIT http://www.iogear.com/product/GUWAVKIT/ But it needs to be available in Australia. And do the 30m range.

    Read the article

  • "Address already in use" error from socket bind, when ports are not being used

    - by Ivan Novick
    I cannot bind (using C or Python sockets) to any port in the range 59969-60000. Using lsof, netstat and fuser I do not see any processes using these ports. I can bind to other ports such as 59900-59968 and 60001-60009. My OS is CentOS release 5.5 (Final) 2.6.18-194.3.1.el5 There must be something I am missing. Anyone have any idea how to debug why this port range is not usable? Cheers, Ivan

    Read the article

  • VMware - How do I configure a host-only network

    - by nXqd
    My understanding about host-only: I use VMware 7. Vmnet1 is the host-only adapter for the host and its IP is 192.168.209.1. I'm really confused about this: does it connect to the Vmnet1 switch? Vmnet1 also has DHCP, and it provides an IP range. Why does the virtual host adapter (Vmnet1) have an IP which isn't in that range, while it's just an adapter in the virtual network and connects through the Vmnet switch like the guest adapter? Waiting for your answers, thanks in advance :)

    Read the article

  • Multiple routers, subnets, gateways etc

    - by allentown
    My current setup is: the cable modem dishes out 13 static IP's (/28), a GB switch is plugged into the cable modem and has access to those 13 static IP's, and I have about 6 "servers" in use right now. The cable modem is also a firewall, DHCP server, and 3-port 10/100 switch. I am using it as a firewall, but not currently as a DHCP server. I have plugged into the cable modem two network cables, one of which goes to the WAN port of a Linksys Dual Band Wireless 10/100/1000 router/switch. Into the Linksys are a few workstations, a few printers, and some laptops connecting to wifi. I set the Linksys to take a static IP, and enabled DHCP for the workstations, printers, etc in 192.168.1.1/24. The network for the Linksys is mostly self contained, backups go to a SAN on that network, and it all happens through that switch, over GB. But I also get internet access from it as well via the cable modem using one static IP. This all works, however, I cannot "see" the static IP machines when I am on the Linksys. I can get to them via ssh and other protocols, and if I want to from "outside", I open holes, like 80, 25, 587, 143, 22, etc. The second wire, from the cable modem/firewall/switch, just uplinks to the managed GB switch. What are the pros and cons of this? I do not like giving up the static IP to the Linksys. I basically have a mixed network of public servers and internal workstations. I want the public servers on public IP's because I do not want to mess with port forwarding and mappings. Is it correct also, that if someone breaches the Linksys wifi, they still would have a hard time getting to the static IP range, just by nature of the network topology? Today, just for a test, I toggled on the DHCP in the firewall/cable modem in the 10.1.10.1/24 range; the Linksys is in the 192.168.1.100/24 range. At that point, all the static IP machines still had in and out access, but the Linksys was unreachable. The cable modem only has 10/100 ports, so I will not plug anything but the network drop into it, which is 50Mb/10Mb. Which makes me think this could be less than ideal, as transfers from the workstation network to the server network will be bottlenecked at 100Mb when I have 1000Mb available. I may not need to solve that, if isolation is better though. I do not move a lot of data, if any, from the Linksys network to the server network, so for it to pretend to be remote is ok. Should I approach this any differently? I could enable DHCP on the cable modem/firewall; it should still send out the statics to the GB switch, but will it also be a DHCP server in the 10.1.10.1/24 range? I can then plug the Linksys into the GB switch, which is now picking up the statics and the 10.1.10.1/24 range, and tell the Linksys to use 10.1.10.5 or so. Now, do I disable DHCP on the Linksys, and the cable modem/firewall will pass through the statics and 10.0.10.1/24 ranges as well? Or, could I open a second DHCP pool on the Linksys? I guess doing so gives me network isolation again, but it is just the reverse of what I have now. But I get out of the bottleneck, not that the Linksys could ever really touch real GB speeds anyway, but the managed switch certainly can. This is all because 13 statics are not that many. Right now, 6 "servers", the Linksys, a managed switch, a few SSL certs, and I am running out. I do not want to waste a static IP on the managed GB switch, or the Linksys, unless it provides me some type of benefit.
Final question: under my current setup, if I am on a workstation sitting at 192.168.1.109 on the Linksys, with GB, and I send a file over ssh to the static IP machine, is that literally going out to the internet and coming back in, or does it stay local? To me it seems like: Workstation (192.168.1.109) -> Linksys DHCP -> Linksys Static IP -> Cable Modem -> Server, and it hits the 10/100 ports on the cable modem, slowing me down. But does it round trip the network, leave and come back in, limiting me to the 50/10 internet speeds? *These are all made up numbers; I do not use default router IP's, as I will one day add a VPN and do not want collisions. I need some recommendations: do I want one big network, or two isolated ones? Printers these days need an IP, everything does, and I cannot get autoconf/bonjour to be reliable on most printers, but I am also not sure I want the "server" side of my operation to be polluted by the workstation side of my operation. Unless there is some magic subnetting I have not learned yet, here is what I am thinking: Cable modem 10/100, has 13 static IP's, publicly accessible -> Enable DHCP on the cable modem -> Cable modem plugs into managed switch -> Managed switch gets 10.1.10.1 as its ssh, telnet, https admin management address -> Managed switch sends static IP's to the servers -> Plug Linksys into managed switch, giving it 10.1.10.2 static internally in Linksys admin -> Linksys gets assigned 10.1.10.x as its DHCP sending range -> Local printers, workstations, iPhones etc, connect to this -> ( Do I enable DHCP or disable it on the Linksys, just define a non-overlapping range, or create an entirely new DHCP at 10.1.50.0/24? I think I am back to isolated again with that method too? ) Thank you for any suggestions. This is the first time I have had to deal with less than a /24, and most are larger than that, but it is just a drop to a cabinet. Otherwise, it's a router, a few repeaters, and soho stuff that is simple, with one IP. I know a few may suggest going all DHCP on the servers, and I may one day, just not now; there has been too much moving of gear for me to be interested in that, and I would want something in the Catalyst series to deal with that.

    Read the article

  • Which wireless keyboard is most secure?

    - by Axxmasterr
    I want to allow someone to use a keyboard wirelessly, but I am concerned that the user passwords will be sent over the air too. Is there a wireless keyboard that encrypts the keystream? I bought an IR keyboard setup; however, it lacks the range to be useful more than a few feet away from the detector. I need a range of 10 feet.

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >